
Implementation of Redis LRU and LFU Cache Eviction Algorithms

Redis implements approximate LRU and LFU eviction by sampling keys and packing timestamps and counters into a compact 24‑bit field. LRU evicts the least recently accessed items; LFU evicts those with low, decay‑adjusted access frequency. Each policy has trade‑offs for different workloads.

vivo Internet Technology

Redis is an in‑memory high‑performance NoSQL database that can handle tens of thousands of read/write requests per second. Because memory is limited, Redis implements a set of cache eviction policies to control memory usage.

Since version 4.0, Redis provides eight eviction policies, four of which are based on LRU (Least Recently Used) or LFU (Least Frequently Used) algorithms. This article introduces the LRU and LFU implementations in Redis, analyzes their effectiveness, and discusses their drawbacks.

1. Redis LRU Implementation

1.1 LRU algorithm principle

LRU evicts the data that has not been accessed for the longest time, assuming that recent accesses predict future accesses. It is widely used in operating systems, MySQL buffer pools, and Redis eviction.

Rather than maintaining the hash‑table‑plus‑doubly‑linked‑list structure that an exact LRU requires, Redis approximates LRU: it randomly samples a small set of keys (controlled by maxmemory-samples) and evicts the one with the oldest recorded access time. This avoids the memory and bookkeeping overhead of tracking a global access ordering while keeping per‑access cost O(1).

Key data structure:

#define LRU_BITS 24
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or
                            * LFU data (least significant 8 bits frequency
                            * and most significant 16 bits access time). */
    int refcount;
    void *ptr;
} robj;

The LRU clock resolution is one second by default, fixed at compile time by LRU_CLOCK_RESOLUTION; whether Redis can use the cached server.lruclock (refreshed by serverCron) or must compute the clock on demand depends on server.hz :

unsigned int LRU_CLOCK(void) {
    unsigned int lruclock;
    if (1000/server.hz <= LRU_CLOCK_RESOLUTION) {
        /* serverCron runs often enough: the cached clock is precise enough. */
        atomicGet(server.lruclock,lruclock);
    } else {
        /* Cron runs too rarely; compute the clock on demand. */
        lruclock = getLRUClock();
    }
    return lruclock;
}

Because the lru field occupies only 24 bits, the timestamp wraps after about 194 days, which can cause the LRU policy to lose effectiveness for long‑running instances.

1.2 LRU drawbacks

LRU only considers recency, ignoring access frequency, which may evict hot data that is accessed frequently but not recently. This limitation motivates the use of LFU.

2. Redis LFU Implementation

2.1 LFU algorithm principle

LFU (Least Frequently Used) evicts items with the lowest access frequency, but frequency is combined with a time‑decay function so that recent accesses weigh more heavily than older ones.

Redis reuses the 24‑bit lru field for LFU: the most significant 16 bits hold a minute‑resolution access timestamp (ldt) and the least significant 8 bits hold the logarithmic access counter.

New keys start with a small nonzero counter (LFU_INIT_VAL) so they are not evicted immediately, before they have had a chance to be accessed:

#define LFU_INIT_VAL 5

Counter increment algorithm (logarithmic increase):

uint8_t LFULogIncr(uint8_t counter) {
  if (counter == 255) return 255;               /* saturated */
  double r = (double)rand()/RAND_MAX;           /* uniform in [0,1] */
  double baseval = counter - LFU_INIT_VAL;
  if (baseval < 0) baseval = 0;
  /* Increment probability shrinks as the counter and log factor grow. */
  double p = 1.0/(baseval*server.lfu_log_factor+1);
  if (r < p) counter++;
  return counter;
}

Key observations:

When counter <= LFU_INIT_VAL , baseval is clamped to zero, so p is 1.0 and the counter is incremented on every access.

When counter > LFU_INIT_VAL , the increment probability falls off as 1/(baseval * lfu_log_factor + 1): the larger the counter and the log factor, the less likely each access is to increment it.

At counter == 255 the value saturates and no longer increments.

LFU also applies a time‑decay function to reduce the counter over time:

unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;        /* high 16 bits: last access time (minutes) */
    unsigned long counter = o->lru & 255;   /* low 8 bits: logarithmic counter */
    /* One decay period = lfu_decay_time minutes; 0 disables decay. */
    unsigned long num_periods = server.lfu_decay_time ? LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}

Empirical analysis (using Python simulations) shows that after the counter exceeds LFU_INIT_VAL , the increment probability drops sharply, forming a curve that approaches zero as the counter approaches 255. Real‑world measurements reveal a roughly square‑root growth of the counter with increasing access counts, indicating limited granularity for distinguishing “warm” keys.

3. Summary

Both LRU and LFU are approximated in Redis to achieve O(1) performance. Choosing between them depends on workload characteristics:

If access patterns are relatively uniform and there is no pronounced hot‑cold data split, LRU is sufficient and simpler to configure.

If the system experiences bursty traffic with clear hot items (e.g., during promotions), LFU can better retain frequently accessed keys.

When memory usage approaches the maxmemory limit, the random sampling nature of both policies limits their effectiveness, and careful capacity planning is required.
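The knobs discussed above correspond to real redis.conf directives; a hedged example (the directive names are Redis's own, the values here are only illustrative, not recommendations):

```conf
maxmemory 2gb
maxmemory-policy allkeys-lfu   # or allkeys-lru / volatile-lru / volatile-lfu
maxmemory-samples 10           # keys sampled per eviction (default 5)
lfu-log-factor 10              # slows counter growth (default 10)
lfu-decay-time 1               # minutes per counter decay period (default 1)
```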

