Understanding Redis LRU and LFU Cache Eviction Algorithms

This article explains the principles, implementation details, and trade‑offs of Redis’s LRU and LFU cache eviction algorithms, including their data structures, code snippets, configuration parameters, and practical guidance on choosing the appropriate strategy based on workload characteristics.

Redis is an in‑memory high‑performance NoSQL database that can handle tens of thousands of read/write requests per second. Because memory is limited, Redis implements a set of cache eviction policies to control memory usage.

Since version 4.0, Redis provides eight eviction policies, four of which are based on LRU (Least Recently Used) or LFU (Least Frequently Used) algorithms. This article focuses on the LRU and LFU implementations in Redis, describing their principles, code, and pros/cons.
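For reference, these are the eight values accepted by the maxmemory-policy directive since Redis 4.0 (the four LRU/LFU-based ones are the focus of this article):

```
noeviction      # reject writes that need more memory (the default)
allkeys-lru     # approximate LRU across all keys
volatile-lru    # approximate LRU among keys with an expire set
allkeys-lfu     # approximate LFU across all keys (Redis >= 4.0)
volatile-lfu    # approximate LFU among keys with an expire set (Redis >= 4.0)
allkeys-random  # evict random keys
volatile-random # evict random keys among those with an expire set
volatile-ttl    # evict keys with an expire set, shortest TTL first
```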

Redis LRU Implementation

The classic LRU algorithm evicts the data that has not been accessed for the longest time. Redis approximates this instead of tracking an exact global order: on each eviction it samples a few random keys (maxmemory-samples, 5 by default), keeps the best candidates in a small pool ordered by idle time, and evicts the key that has been idle the longest, greatly reducing bookkeeping overhead.

Redis stores keys in a hash table and does not maintain a dedicated doubly‑linked list for LRU. This design choice is driven by three factors: random sampling eliminates the need for full sorting, performance considerations avoid per‑access data movement, and memory constraints keep the data structures compact.

#define LRU_BITS 24
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or
                               * LFU data (least significant 8 bits frequency
                               * and most significant 16 bits access time). */
    int refcount;
    void *ptr;
} robj;

The LRU clock has a resolution of LRU_CLOCK_RESOLUTION milliseconds (1000 ms, i.e. one second, by default). Its update path depends on server.hz: when the server cron period (1000/server.hz) is at least as fine as the clock resolution, Redis reads a cached global clock that serverCron keeps up to date; otherwise it computes the current time on each access.

unsigned int LRU_CLOCK(void) {
    unsigned int lruclock;
    if (1000/server.hz <= LRU_CLOCK_RESOLUTION) {
        atomicGet(server.lruclock,lruclock);
    } else {
        lruclock = getLRUClock();
    }
    return lruclock;
}

Because the LRU field occupies only 24 bits, the clock wraps after about 194 days, which can cause the LRU eviction to become ineffective for very long‑running instances.

LRU Drawbacks

LRU only considers the recency of accesses, ignoring how frequently a key is used. Consequently, hot data that is accessed often but not recently may be evicted, which is undesirable for many workloads.

Redis LFU Implementation

LFU (Least Frequently Used) evicts keys that have been accessed the fewest times, adjusted by a decay function so that recent accesses carry more weight. Redis reuses the same 24-bit lru field, splitting it into a 16-bit last-decrement timestamp (in minutes) in the most significant bits and an 8-bit logarithmic access counter in the least significant bits.

#define LFU_INIT_VAL 5
/* Logarithmically increment a counter. The greater is the current counter value
 * the less likely is that it gets really incremented. Saturate it at 255. */
uint8_t LFULogIncr(uint8_t counter) {
  if (counter == 255) return 255;
  double r = (double)rand()/RAND_MAX;
  double baseval = counter - LFU_INIT_VAL;
  if (baseval < 0) baseval = 0;
  double p = 1.0/(baseval*server.lfu_log_factor+1);
  if (r < p) counter++;
  return counter;
}

The counter is updated probabilistically: low values increase almost every access, while higher values increase with decreasing probability, eventually saturating at 255.

unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;
    unsigned long counter = o->lru & 255;
    unsigned long num_periods = server.lfu_decay_time ? LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}

Two configurable parameters, lfu_log_factor and lfu_decay_time, control the increment probability and the decay speed of the counter, allowing the algorithm to adapt to different workload patterns.

LFU Observations

Empirical tests show that the LFU counter grows roughly like a square‑root curve with access count, and that distinguishing between keys with similar frequencies can be difficult unless the access count difference is large.

Choosing Between LRU and LFU

For workloads with relatively uniform access patterns and stable CPU/OPS usage, LRU is usually sufficient. For workloads with strong temporal locality—such as flash sales or promotional events—LFU better captures hot keys and prevents them from being evicted.
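As a concrete starting point for a cache-only workload dominated by hot keys, a configuration along these lines could apply the above (the values are illustrative and should be tuned against real traffic):

```
maxmemory 2gb
maxmemory-policy allkeys-lfu
lfu-log-factor 10     # default; lower values let the counter grow faster
lfu-decay-time 1      # decay the counter by one per idle minute
maxmemory-samples 10  # larger sample = closer to exact LFU, more CPU
```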

When memory usage presses against the maxmemory limit, keep in mind that eviction candidates are chosen by random sampling rather than from a global ordering, so even under LFU a hot key can occasionally be evicted.

Conclusion

The article provides a detailed overview of Redis’s LRU and LFU eviction strategies, their internal data structures, code implementations, and practical considerations for selecting the appropriate policy based on the characteristics of the stored data and the operational environment.

Written by Architect

Professional architect sharing architecture insights: high-availability, high-performance, and high-stability architectures, big data, machine learning, Java, distributed systems, and AI.