Unlocking Redis LFU: How the Least Frequently Used Algorithm Works
This article explains Redis's LFU eviction algorithm: its underlying principle, its code-level mechanics, how to enable it, and how frequency-based counters with decay let it make smarter eviction decisions than LRU.
1 Introduction
In the previous article we introduced LRU, the predecessor of Redis's LFU, along with its main drawbacks: stability and performance issues, extra space for linked lists, mismatched access patterns, and limited prediction of future accesses.
LRU can cause frequent eviction and loading of data that is only used intermittently, reducing cache efficiency.
The linked list used to track access order consumes additional memory.
Access patterns are not always time‑based; LRU may evict data we still need.
LRU only predicts recent usage, which can mistakenly evict high‑frequency keys that have a temporary idle period.
Consider two keys where Key 1 is accessed more frequently but Key 2 more recently: under LRU, Key 1 would be evicted first even though its access frequency is higher than Key 2's, whereas the preferable order would evict Key 2 before Key 1. To address these issues, algorithms such as LFU (Least Frequently Used) and FIFO were proposed.
2 Implementation Principle
LFU, introduced in Redis 4.0, evicts keys primarily by access frequency, with a decay mechanism that also accounts for recency, so frequently used items are favored when cache capacity is limited.
The algorithm records a usage count for each cache entry and evicts the entries with the lowest counts when memory is full.
While LFU effectively prevents cache overflow and reduces the chance of removing important data, it requires additional memory to store counters and incurs higher computational overhead.
LFU is commonly used in web caches, database caches, and file‑system caches to improve performance and stability.
Implementation details:
Redis implements an approximate LFU: a probabilistic Morris-style counter estimates each key's access frequency, and a decay period gradually reduces counters over time so the algorithm adapts to changing access patterns. Redis 4.0 added two LFU policies:
allkeys‑lfu: applies LFU to all keys.
volatile‑lfu: applies LFU only to keys with an expiration time.
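The decay side of the scheme is easy to sketch. Below is a minimal standalone version of the idea behind Redis's LFUDecrAndReturn; the function name lfu_decr and the explicit elapsed/decay parameters are illustrative, not the actual Redis API, which reads these values from server state:

```c
/* Illustrative decay step: lower the counter by one for every full
 * decay period that has elapsed since the last decrement, flooring
 * at zero. In Redis the period comes from the lfu-decay-time setting. */
unsigned long lfu_decr(unsigned long counter,
                       unsigned long elapsed_minutes,
                       unsigned long decay_time) {
    if (decay_time == 0) return counter;        /* decay disabled */
    unsigned long periods = elapsed_minutes / decay_time;
    return (periods > counter) ? 0 : counter - periods;
}
```

With a decay period of 10 minutes, a key idle for 25 minutes loses 2 counter points; a key idle long enough decays to zero and becomes a prime eviction candidate.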
3 Algorithm Implementation
3.1 Understanding the Source Code
In LFU mode, the 24‑bit lru field of a Redis object is split: the high 16 bits store the last decrement time (ldt), and the low 8 bits store the logarithmic access counter (logc).
The high 16‑bit ldt records the last time the counter was decreased; it wraps roughly every 45.5 days.
The low 8‑bit logc stores a logarithmic count (max 255). New keys start with LFU_INIT_VAL (default 5) to avoid immediate eviction.
<code>     16 bits       8 bits
+----------------+--------+
| Last decr time | LOG_C  |
+----------------+--------+
</code>Source for calculating the last decrement time:
<code>/* Return the current time in minutes, just taking the least significant
 * 16 bits. The returned time is suitable to be stored as LDT (last decrement
 * time) for the LFU implementation. */
unsigned long LFUGetTimeInMinutes(void) {
    return (server.unixtime/60) & 65535;
}

/* Given an object last access time, compute the minimum number of minutes
 * that elapsed since the last access. Handle overflow (ldt greater than
 * the current 16 bits minutes time) considering the time as wrapping
 * exactly once. */
unsigned long LFUTimeElapsed(unsigned long ldt) {
    unsigned long now = LFUGetTimeInMinutes();
    if (now >= ldt) return now-ldt;
    return 65535-ldt+now;
}
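
/* To exercise the wraparound case deterministically, the same arithmetic
 * can be re-created with "now" passed in explicitly instead of read from
 * server.unixtime (an illustrative harness, not part of Redis): */
unsigned long lfu_time_elapsed(unsigned long now, unsigned long ldt) {
    if (now >= ldt) return now - ldt;
    /* The 16-bit minute clock wrapped exactly once since ldt. */
    return 65535 - ldt + now;
}
/* Example: lfu_time_elapsed(100, 88) == 12 (normal case), while
 * lfu_time_elapsed(5, 65530) == 10 when ldt was recorded just before
 * the 16-bit clock rolled over. */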
</code>Source for the logarithmic counter increment:
<code>/* Logarithmically increment a counter. The greater is the current counter value
 * the less likely is that it gets really incremented. Saturate it at 255. */
uint8_t LFULogIncr(uint8_t counter) {
    if (counter == 255) return 255;
    double r = (double)rand()/RAND_MAX;
    double baseval = counter - LFU_INIT_VAL;
    if (baseval < 0) baseval = 0;
    double p = 1.0/(baseval*server.lfu_log_factor+1);
    if (r < p) counter++;
    return counter;
}
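
/* Plugging numbers into the formula above shows why the counter is
 * logarithmic. With LFU_INIT_VAL = 5 and the default lfu-log-factor of 10
 * (both hard-coded here for illustration; not the Redis API): */
double lfu_incr_probability(unsigned int counter, unsigned int log_factor) {
    double baseval = (double)counter - 5;   /* LFU_INIT_VAL */
    if (baseval < 0) baseval = 0;
    return 1.0 / (baseval * log_factor + 1);
}
/* lfu_incr_probability(5, 10)   == 1.0        (new keys always count)
 * lfu_incr_probability(55, 10)  == 1/501  ~= 0.002
 * lfu_incr_probability(205, 10) == 1/2001 ~= 0.0005
 * so reaching the 255 ceiling takes on the order of a million accesses
 * at the default factor. */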
</code>3.2 Enabling LFU in redis.conf
Modify redis.conf to set the eviction policy to volatile-lfu or allkeys-lfu (you can also switch at runtime with CONFIG SET maxmemory-policy allkeys-lfu, and tune the counter and decay via the lfu-log-factor and lfu-decay-time directives):
<code># MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select one from the following behaviors:
#
# volatile-lru -> Evict using approximated LRU, only keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key having an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, when there are no suitable keys for
# eviction, Redis will return an error on write operations that require
# more memory.
#
# The default is:
# maxmemory-policy noeviction
#
# Enable LFU for keys with expiration:
# maxmemory-policy volatile-lfu
# Enable LFU for all keys:
# maxmemory-policy allkeys-lfu
</code>4 Summary
LFU (Least Frequently Used), introduced in Redis 4.0, evicts keys by access frequency, tempered by a decay that accounts for recency. By keeping frequently used items when memory is limited, it avoids the clear shortcomings of LRU.