LRU stands for Least Recently Used and is one kind of cache replacement policy. A computer's storage is fixed in size and cannot hold every file on the server, so when a new file has to be brought into the cache, an existing file must be evicted according to some rule. That rule is what we call a cache replacement policy.
Implementing an LRU cache calls for a key-value structure plus a linear structure: the former gives O(1) lookups, the latter maintains the eviction order. A HashMap combined with a doubly linked list fits the bill. The benefit of a doubly linked list is that, given a node to delete, its neighbors are reachable directly through its prev and next pointers, so the node can be unlinked from the list in O(1) without any traversal.
```java
import java.util.HashMap;

public class LRUCache {
    class LinkNode {
        int key;
        int value;
        LinkNode prev;
        LinkNode next;
    }

    // Insert a node right after the head sentinel (the most recently used position).
    private void addNode(LinkNode node) {
        node.prev = head;
        node.next = head.next;
        head.next.prev = node;
        head.next = node;
    }

    // Unlink a node in O(1) via its prev and next pointers.
    private void removeNode(LinkNode node) {
        LinkNode prev = node.prev;
        LinkNode next = node.next;
        prev.next = next;
        next.prev = prev;
    }

    private void moveToHead(LinkNode node) {
        removeNode(node);
        addNode(node);
    }

    // Remove and return the node just before the tail sentinel, i.e. the least recently used entry.
    private LinkNode popTail() {
        LinkNode res = tail.prev;
        removeNode(res);
        return res;
    }

    private HashMap<Integer, LinkNode> cache = new HashMap<Integer, LinkNode>();
    private int size;
    private int capacity;
    private LinkNode head, tail; // sentinel nodes

    public LRUCache(int capacity) {
        this.size = 0;
        this.capacity = capacity;
        head = new LinkNode();
        tail = new LinkNode();
        head.next = tail;
        tail.prev = head;
    }

    public int get(int key) {
        LinkNode node = cache.get(key);
        if (node == null) return -1;
        moveToHead(node);
        return node.value;
    }

    public void put(int key, int value) {
        LinkNode node = cache.get(key);
        if (node == null) {
            LinkNode newNode = new LinkNode();
            newNode.key = key;
            newNode.value = value;
            cache.put(key, newNode);
            addNode(newNode);
            ++size;
            if (size > capacity) {
                // Over capacity: drop the least recently used entry.
                LinkNode removed = popTail();
                cache.remove(removed.key);
                --size;
            }
        } else {
            node.value = value;
            moveToHead(node);
        }
    }
}
```
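To see the behavior, here is a small usage sketch against the class above; the LRUCacheDemo wrapper and the particular keys and values are made up purely for illustration:

```java
public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);   // room for two entries

        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.get(1));   // 1, and key 1 becomes the most recently used

        cache.put(3, 3);                    // capacity exceeded: evicts key 2, the least recently used
        System.out.println(cache.get(2));   // -1, key 2 has been evicted
        System.out.println(cache.get(3));   // 3
    }
}
```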
A more elegant implementation turns the doubly linked list into a ring: pre-allocate capacity nodes whose value is initialized to -1, and adjust the ring on every get and put. Note that -1 doubles as the "empty slot" marker here, so this trick assumes -1 is never stored as a real value.
```java
import java.util.HashMap;

class LRUCache {
    private class Node {
        int key, val;
        Node next, pre;

        Node(int key, int val, Node pre, Node next) {
            this.key = key;
            this.val = val;
            this.pre = pre;
            this.next = next;
        }
    }

    // head points at the slot to overwrite next; head.next is the most recently used entry.
    private Node head = new Node(-1, -1, null, null);
    private HashMap<Integer, Node> map = new HashMap<>();

    private void move2Head(Node cur) {
        if (cur == head) {
            // Stepping head backwards turns cur into head.next, the most recently used slot.
            head = head.pre;
            return;
        }
        // Unlink cur from its current position ...
        cur.pre.next = cur.next;
        cur.next.pre = cur.pre;
        // ... and relink it right after head.
        cur.next = head.next;
        cur.next.pre = cur;
        head.next = cur;
        cur.pre = head;
    }

    LRUCache(int capacity) {
        // Pre-allocate a ring of `capacity` nodes, all marked empty with value -1.
        Node cur = head;
        for (int i = 0; i < capacity - 1; i++) {
            cur.next = new Node(-1, -1, cur, null);
            cur = cur.next;
        }
        cur.next = head;
        head.pre = cur;
    }

    public int get(int key) {
        if (!map.containsKey(key)) return -1;
        Node node = map.get(key);
        move2Head(node);
        return node.val;
    }

    public void put(int key, int val) {
        if (map.containsKey(key)) {
            Node node = map.get(key);
            node.val = val;
            move2Head(node);
        } else {
            // Reuse the slot under head; if it still holds a live entry, evict it from the map.
            if (head.val != -1) map.remove(head.key);
            head.key = key;
            head.val = val;
            map.put(key, head);
            head = head.pre;
        }
    }
}
```
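A note on this design: head always ends up pointing at the slot that will be overwritten next (an empty -1 slot while the ring is filling up, and the least recently used entry afterwards), while head.next holds the most recently used one. Because all capacity nodes are allocated once in the constructor, no nodes are created or freed later on; eviction is simply overwriting the slot under head and stepping head backwards.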
Finally, Java's LinkedHashMap can implement an LRU cache directly. :)
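A minimal sketch of that idea (the class name LinkedHashLRUCache and the int-based get wrapper are illustrative assumptions): constructing the map with accessOrder set to true makes its iteration order follow access order, and overriding removeEldestEntry tells the map to evict the eldest entry once it grows past the capacity.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an LRU cache on top of LinkedHashMap; the class name is illustrative.
class LinkedHashLRUCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    LinkedHashLRUCache(int capacity) {
        // 0.75f is the default load factor; `true` enables access order,
        // so each access moves the touched entry to the end of the iteration order.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        // Called after each insertion; returning true removes the least recently used entry.
        return size() > capacity;
    }

    public int get(int key) {
        // Mirror the -1 convention used by the hand-rolled versions above.
        return getOrDefault(key, -1);
    }
}
```

All of the list bookkeeping then lives inside the JDK class itself; with accessOrder enabled, get, getOrDefault, and put all count as accesses to the corresponding entry.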
This post is copyrighted by Razertory's Blog. If you spot any mistakes, corrections are welcome (https://t.me/razertory). Please credit the source when reposting. Original post: https://razertory.me/2020/02/16/lru-cache/