Backend Development · 7 min read

Configuring Redis Memory Size and Eviction Policies (LRU & LFU)

This article explains how to size Redis memory, configure the maxmemory and maxmemory-policy settings, and choose among the available eviction strategies (including noeviction, allkeys-lru, allkeys-lfu, and the volatile variants), with a look at the LRU and LFU algorithms Redis uses under the hood.


When cached data grows beyond the allocated capacity, Redis eventually fills its memory and must evict entries; choosing an appropriate eviction strategy lets it discard the least valuable data to make room for new writes.

To set the Redis memory limit, run config set maxmemory 5gb (or set maxmemory in the configuration file) and verify it with config get maxmemory. A common rule of thumb, based on the 80/20 principle (roughly 20% of the keys serve 80% of the requests), is to size the cache at 15%-30% of the total data set.
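As a concrete illustration, the limit can be declared in redis.conf (the 5gb value here is just an example; size it to your own data set):

```
# redis.conf -- cap the dataset at 5 GiB; 0 (the default) means no limit
maxmemory 5gb
```

The same setting can be applied at runtime with CONFIG SET maxmemory 5gb and read back with CONFIG GET maxmemory, which reports the value in bytes.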

Redis provides eight eviction policies, divided into two categories: a policy that never evicts (noeviction, the default, under which writes fail once memory is full) and seven policies that do evict: allkeys-random, allkeys-lru, allkeys-lfu, volatile-random, volatile-ttl, volatile-lru, and volatile-lfu.
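For a pure cache, where every key is expendable, a common configuration is to evict across the whole keyspace:

```
# redis.conf -- once maxmemory is reached, evict approximately
# least-recently-used keys from the entire keyspace
maxmemory-policy allkeys-lru
```

At runtime the same policy can be applied with CONFIG SET maxmemory-policy allkeys-lru.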

Policies prefixed with volatile affect only keys that have an expiration set, while allkeys policies consider every key regardless of TTL. Note that if no keys carry a TTL, the volatile policies have nothing to evict and Redis behaves as if noeviction were set.

Redis implements an approximated LRU (Least Recently Used) algorithm. Rather than maintaining a full ordering of all keys, it records a last-access clock in each object's 24-bit lru field; when eviction is needed, it samples a handful of keys (5 by default, tunable with config set maxmemory-samples) and removes the one with the oldest access time. Larger samples approximate true LRU more closely at a slightly higher CPU cost.
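The sampling idea can be sketched in a few lines of Python. This is an illustrative toy model, not Redis's actual C implementation; the class name, the logical clock, and the dict-based storage are all assumptions made for clarity:

```python
import random


class ApproxLRUCache:
    """Toy model of Redis's approximated LRU: each key stores a last-access
    clock; on eviction we sample a few keys and drop the stalest one."""

    def __init__(self, max_keys, samples=5):
        self.max_keys = max_keys
        self.samples = samples          # mirrors maxmemory-samples
        self.data = {}                  # key -> value
        self.last_access = {}           # key -> logical clock (stands in for the lru field)
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.last_access[key] = self.clock

    def get(self, key):
        if key in self.data:
            self._touch(key)
            return self.data[key]
        return None

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.max_keys:
            self._evict_one()
        self.data[key] = value
        self._touch(key)

    def _evict_one(self):
        # Sample up to `samples` keys and evict the least recently used of them.
        pool = random.sample(list(self.data), min(self.samples, len(self.data)))
        victim = min(pool, key=lambda k: self.last_access[k])
        del self.data[victim]
        del self.last_access[victim]
```

When samples equals the number of keys, the sketch degenerates into exact LRU; with a small sample it trades a little precision for O(samples) eviction cost, which is the trade-off Redis makes.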

The LFU (Least Frequently Used) algorithm, added in Redis 4.0, reuses the 24-bit lru field: 8 bits hold a logarithmic access-frequency counter and the remaining 16 bits hold a decay timestamp in minutes. The counter is incremented probabilistically on each access, so it grows ever more slowly for hot keys, and it is decayed over time so that keys that were hot long ago do not stay pinned in memory. When evicting, Redis samples keys and removes the one with the lowest counter.
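The probabilistic increment can be sketched as follows. This models the scheme described in Redis's own configuration documentation; LFU_INIT_VAL and the lfu_log_factor default of 10 mirror Redis's defaults, but the function itself is an illustrative sketch rather than the production code:

```python
import random

LFU_INIT_VAL = 5  # new keys start here in Redis, so they are not evicted immediately


def lfu_log_incr(counter, lfu_log_factor=10):
    """Probabilistic logarithmic counter increment: the higher the counter
    already is, the less likely a given access is to bump it further."""
    if counter >= 255:          # the counter is 8 bits, so it saturates at 255
        return counter
    baseval = max(counter - LFU_INIT_VAL, 0)
    p = 1.0 / (baseval * lfu_log_factor + 1)
    if random.random() < p:
        counter += 1
    return counter
```

Because the increment probability shrinks as the counter grows, an 8-bit value can distinguish access frequencies spanning many orders of magnitude, which is why such a small counter suffices.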

Both algorithms have their place: LRU works well when recent access predicts future use, while LFU helps when some keys are touched only occasionally, keeping such one-off items from crowding genuinely hot data out of the cache.

Tags: backend, Redis, Caching, LRU, memory, LFU, Eviction
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, as well as architecture evolution with internet technologies. Architects with ideas to share are welcome to exchange and learn together.
