Master LRU Cache: O(1) Implementation with Hash-Linked List in Java
Learn how to design and implement an O(1) LRU (Least Recently Used) cache in Java using a combined hash map and doubly linked list, covering algorithm concepts, data structure choices, method details, and complete code with examples and explanations.
Introduction
LRU (Least Recently Used) is a common cache eviction strategy that removes the least recently accessed items when the cache is full, assuming recently used data is more valuable.
Algorithm Description
Implement an LRUCache class with the following methods:
LRUCache(int capacity): initialize the cache with a positive capacity.
int get(int key): return the value associated with key if it exists; otherwise return -1.
void put(int key, int value): insert or update the key-value pair; if the cache exceeds its capacity, remove the least recently used entry before inserting.
Both get and put must operate in O(1) time.
Example
<code>/* Cache capacity is 2 */
LRUCache cache = new LRUCache(2);
cache.put(1, 1); // cache = [(1,1)]
cache.put(2, 2); // cache = [(2,2), (1,1)]
cache.get(1); // returns 1, cache = [(1,1), (2,2)]
cache.put(3, 3); // evicts key 2, cache = [(3,3), (1,1)]
cache.get(2); // returns -1 (not found)
cache.put(1, 4); // updates key 1, cache = [(1,4), (3,3)]
</code>
Algorithm Design
To achieve O(1) operations, the cache must support fast lookup plus constant-time insertion and deletion while maintaining access order. A hash table provides fast lookup; a doubly linked list maintains order and allows constant-time insertion and deletion at any known position. Combining them yields a "hash-linked list".
The core data structure consists of a doubly linked list and a hash map.
The hash map gives O(1) access to nodes, and the doubly linked list preserves usage order, enabling quick removal of the least recently used node.
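As an aside, Java's standard library already packages this exact combination: a LinkedHashMap constructed with accessOrder = true keeps its entries in usage order, and overriding removeEldestEntry makes it evict automatically. A minimal sketch (the class name LruViaLinkedHashMap is ours, not from the article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an LRU cache built on the standard library, for comparison
// with the hand-rolled hash-linked list below.
class LruViaLinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruViaLinkedHashMap(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() reorders entries
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put; returning true evicts the least
        // recently used entry once the capacity is exceeded.
        return size() > capacity;
    }
}
```

Writing the structure by hand, as this article does, shows what LinkedHashMap hides: the linked list that records usage order and the map that locates list nodes in O(1).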
Code Implementation
Define a node class for the doubly linked list:
<code>class DLinkedNode {
    int key;
    int value;
    DLinkedNode pre;
    DLinkedNode next;

    public DLinkedNode() {}

    public DLinkedNode(int key, int value) {
        this.key = key;
        this.value = value;
    }
}</code>
Implement the LRUCache class:
<code>import java.util.HashMap;
import java.util.Map;

class LRUCache {
    private Map<Integer, DLinkedNode> cache = new HashMap<>();
    private int size;
    private int capacity;
    private DLinkedNode head, tail; // pseudo head and tail

    public LRUCache(int capacity) {
        this.size = 0;
        this.capacity = capacity;
        head = new DLinkedNode();
        tail = new DLinkedNode();
        head.next = tail;
        tail.pre = head;
    }

    public int get(int key) {
        DLinkedNode node = cache.get(key);
        if (node == null) return -1;
        moveToFirst(node); // accessing a key marks it most recently used
        return node.value;
    }

    public void put(int key, int value) {
        DLinkedNode node = cache.get(key);
        if (node != null) {
            node.value = value;
            moveToFirst(node);
        } else {
            if (size == capacity) {
                // evict the least recently used entry (the node before tail)
                DLinkedNode tailNode = removeLast();
                cache.remove(tailNode.key);
                size--;
            }
            DLinkedNode newNode = new DLinkedNode(key, value);
            cache.put(key, newNode);
            addFirst(newNode);
            size++;
        }
    }

    private void addFirst(DLinkedNode node) {
        node.pre = head;
        node.next = head.next;
        head.next.pre = node;
        head.next = node;
    }

    private void moveToFirst(DLinkedNode node) {
        remove(node);
        addFirst(node);
    }

    private void remove(DLinkedNode node) {
        node.pre.next = node.next;
        node.next.pre = node.pre;
    }

    private DLinkedNode removeLast() {
        DLinkedNode ans = tail.pre;
        remove(ans);
        return ans;
    }

    public int size() { // expose the current number of cached entries
        return size;
    }
}
</code>
Summary and Key Points
The LRU cache combines a doubly linked list (for ordered elements and O(1) insertion/deletion) with a hash map (for O(1) lookup).
Storing both key and value in the list nodes is essential because removal of the tail node also requires deleting its key from the hash map.
Using a pseudo head and tail simplifies edge‑case handling when inserting or removing nodes.
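To make the last point concrete, here is a small illustrative sketch contrasting unlink logic with and without sentinel nodes (the Node class and method names here are hypothetical, not part of the implementation above):

```java
// Minimal doubly linked node for illustration only.
class Node {
    Node pre, next;
}

class SentinelDemo {
    // With pseudo head/tail sentinels, node.pre and node.next are
    // never null, so unlinking needs no branching:
    static void removeWithSentinels(Node node) {
        node.pre.next = node.next;
        node.next.pre = node.pre;
    }

    // Without sentinels, removing the first or last node needs null
    // checks, and the caller must track the possibly new head:
    static Node removeWithoutSentinels(Node head, Node node) {
        if (node.pre != null) node.pre.next = node.next;
        else head = node.next;               // node was the head
        if (node.next != null) node.next.pre = node.pre;
        return head;
    }
}
```

Every edge case eliminated by the sentinels is a branch that cannot be forgotten or gotten wrong.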
Reference: LeetCode LRU Cache solution