
Understanding LRU Cache and Its Implementation in Java and Redis

This article explains the LRU (Least Recently Used) caching principle, shows how to build an O(1) LRU cache using a HashMap and doubly linked list in Java, and describes Redis's approximate LRU eviction algorithm based on a global clock and random sampling.


LRU (Least Recently Used) is a cache eviction strategy that removes the least recently accessed items when memory is limited, commonly used in operating systems, CPU caches, and application-level caches.

To achieve O(1) insertion and removal, a typical design combines a HashMap for fast key lookup with a doubly linked list that maintains access order; new or accessed nodes are moved to the head, and the tail node is evicted when capacity is exceeded.

Below is the skeleton of a Java implementation that defines a DLinkedNode class and an LRUCache class. The cache stores nodes in a Hashtable<Integer, DLinkedNode>, updates the linked list on get and set, and evicts the tail node when capacity is exceeded.

class DLinkedNode {
    int key;
    int value;
    DLinkedNode pre;
    DLinkedNode post;
}

public class LRUCache {
    private Hashtable<Integer, DLinkedNode> cache = new Hashtable<>();
    private int count;
    private int capacity;
    private DLinkedNode head, tail;

    public LRUCache(int capacity) { /* set capacity, create sentinel head/tail nodes */ }
    public int get(int key) { /* move node to head and return its value, or -1 if absent */ }
    public void set(int key, int value) { /* add or update node, evict tail if over capacity */ }
    private void addNode(DLinkedNode node) { /* insert node right after head */ }
    private void removeNode(DLinkedNode node) { /* unlink node from the list */ }
    private void moveToHead(DLinkedNode node) { /* removeNode, then addNode */ }
    private DLinkedNode popTail() { /* remove and return the node just before tail */ }
}
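If hand-rolling the linked list is not required, the JDK's own java.util.LinkedHashMap provides the same O(1) LRU behavior in far less code: constructed in access order, it keeps entries sorted least-recently-used first, and overriding removeEldestEntry makes eviction automatic. A minimal sketch (the class name SimpleLruCache is illustrative, not from the article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap with accessOrder=true reorders entries on every get/put,
// so its eldest entry is always the least recently used one.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SimpleLruCache(int capacity) {
        super(16, 0.75f, true); // true => access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry after each insert
    }
}

public class LruDemo {
    public static void main(String[] args) {
        SimpleLruCache<Integer, String> cache = new SimpleLruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // touch key 1, so key 2 becomes least recent
        cache.put(3, "c"); // evicts key 2
        System.out.println(cache.containsKey(2)); // false
        System.out.println(cache.keySet());       // [1, 3]
    }
}
```

The hand-written version above remains useful for interviews and for cases where the node structure must carry extra bookkeeping, but LinkedHashMap is the idiomatic choice in application code.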

Redis implements an approximate LRU algorithm to reduce memory overhead. It maintains a global 24-bit LRU clock (server.lruclock) that is updated every REDIS_LRU_CLOCK_RESOLUTION seconds (1 second by default). Each object stores its last access time in its lru field.

#define REDIS_LRU_BITS 24
unsigned lruclock:REDIS_LRU_BITS; /* Clock for LRU eviction */

The function estimateObjectIdleTime computes how long an object has been idle, handling clock wrap-around. Redis samples a configurable number of random keys (server.maxmemory_samples, default 5), estimates their idle times, and evicts the key with the highest idle time. This sampling approach approximates true LRU while keeping overhead low.

unsigned long estimateObjectIdleTime(robj *o) {
    if (server.lruclock >= o->lru) {
        return (server.lruclock - o->lru) * REDIS_LRU_CLOCK_RESOLUTION;
    } else {
        return ((REDIS_LRU_CLOCK_MAX - o->lru) + server.lruclock) * REDIS_LRU_CLOCK_RESOLUTION;
    }
}
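The wrap-around arithmetic is easy to port to Java for experimentation. In the sketch below, the constants mirror Redis's 24-bit clock, but the method name and types are illustrative, not Redis source:

```java
public class LruClockDemo {
    // Assumed to mirror Redis's 24-bit LRU clock constants
    static final int REDIS_LRU_BITS = 24;
    static final long REDIS_LRU_CLOCK_MAX = (1L << REDIS_LRU_BITS) - 1; // 0xFFFFFF
    static final long REDIS_LRU_CLOCK_RESOLUTION = 1; // seconds per clock tick

    // Estimated idle time in seconds, handling wrap-around of the 24-bit clock
    static long estimateIdleTime(long lruclock, long objectLru) {
        if (lruclock >= objectLru) {
            return (lruclock - objectLru) * REDIS_LRU_CLOCK_RESOLUTION;
        } else {
            // The clock wrapped past its 24-bit maximum since the object was touched
            return ((REDIS_LRU_CLOCK_MAX - objectLru) + lruclock) * REDIS_LRU_CLOCK_RESOLUTION;
        }
    }

    public static void main(String[] args) {
        System.out.println(estimateIdleTime(100, 40)); // 60: no wrap-around
        System.out.println(estimateIdleTime(10, LruClockDemo.REDIS_LRU_CLOCK_MAX - 5)); // 15: wrapped
    }
}
```

The second call shows why the else branch matters: an object touched 5 ticks before the clock wrapped, read 10 ticks after the wrap, has been idle 15 seconds, not a huge negative number.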

Redis supports two LRU-related eviction policies: volatile-lru (considers only keys with an expiration set) and allkeys-lru (considers all keys). During eviction, Redis examines server.maxmemory_samples random keys, computes their idle times, and removes the sampled key with the longest idle time.
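The sampling step can be illustrated with a small Java simulation. Everything here (the map standing in for the keyspace, the method name, the clock values) is hypothetical, not Redis source; the point is only the shape of the algorithm: draw N random keys, estimate each one's idle time, keep the worst:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class SampledEvictionDemo {
    // Hypothetical keyspace: key -> last-access tick (wrap-around ignored for brevity)
    static Integer pickEvictionCandidate(Map<Integer, Long> lastAccess,
                                         long clock, int samples, Random rnd) {
        List<Integer> keys = new ArrayList<>(lastAccess.keySet());
        Integer best = null;
        long bestIdle = -1;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            Integer key = keys.get(rnd.nextInt(keys.size()));
            long idle = clock - lastAccess.get(key);
            if (idle > bestIdle) {
                bestIdle = idle;
                best = key; // keep the most idle key seen so far
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<Integer, Long> lastAccess = new HashMap<>();
        for (int k = 0; k < 100; k++) lastAccess.put(k, (long) k); // key k touched at tick k
        // With small sample counts, old keys are likely, but not guaranteed,
        // to be chosen: an approximation of true LRU, not an exact one.
        Integer victim = pickEvictionCandidate(lastAccess, 100, 5, new Random());
        System.out.println("evicting key " + victim);
    }
}
```

Raising maxmemory_samples makes the approximation closer to true LRU at the cost of more work per eviction, which is exactly the space/time trade-off the article describes.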

In summary, while LRU is a simple concept, production systems like Redis adopt approximations and sampling to balance space efficiency and performance.

Tags: Java, Algorithm, Redis, Caching, HashMap, LRU Cache, Doubly Linked List
Written by

Selected Java Interview Questions

A professional Java tech channel sharing common knowledge to help developers fill gaps. Follow us!
