
Configuring Redis Memory Size and Understanding Eviction Policies (LRU, LFU)

This article explains how to set Redis memory limits and choose an appropriate eviction policy such as noeviction, allkeys-lru, allkeys-lfu, or volatile-ttl, and describes the underlying LRU and LFU algorithms, including their implementation details and practical configuration commands.

Architecture Digest

Generally, cache capacity is smaller than the total data size, so as the cache fills up Redis will inevitably run out of memory, triggering its eviction mechanism. You need to select a strategy to remove "unimportant" data and free space for new entries.

Configuring Redis memory size

According to the 80/20 principle, setting Redis memory to about 20% of total data can potentially serve 80% of requests. In practice, a cache size of 15%–30% of total data is recommended, depending on the workload.

Example (set to 5GB, default unit is bytes):

config set maxmemory 5gb

You can also query the current setting:

config get maxmemory
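As a rough illustration of the sizing guideline above, here is a small sketch in plain Python (the function name and defaults are mine, not part of Redis):

```python
def suggested_maxmemory(total_data_bytes, fraction=0.2):
    """Suggest a Redis maxmemory value as a fraction of total data size.

    fraction=0.2 follows the 80/20 rule of thumb from the article;
    tune it between 0.15 and 0.30 depending on the workload.
    """
    if not 0 < fraction < 1:
        raise ValueError("fraction must be between 0 and 1")
    return int(total_data_bytes * fraction)

# 25 GB of data -> about 5 GB of cache
print(suggested_maxmemory(25 * 1024**3))  # 5368709120
```

The resulting byte count is what you would pass to config set maxmemory.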

Redis eviction policies

Before Redis 4.0 there were six policies; version 4.0 added two more, introducing the LFU algorithm. The default policy is noeviction, which never evicts data and instead returns an error on writes once the memory limit is reached.

Eviction policies are divided into two groups:

Non-eviction policy: noeviction – no data is removed; if memory is insufficient, write operations fail.

Eviction policies:

allkeys-random – randomly removes any key.

allkeys-lru – applies the LRU algorithm to all keys.

allkeys-lfu – applies the LFU algorithm to all keys.

volatile-random – randomly removes keys that have an expiration set.

volatile-ttl – removes keys with the nearest expiration time first.

volatile-lru – applies LRU to keys with an expiration set.

volatile-lfu – applies LFU to keys with an expiration set.

Note: policies starting with volatile only affect keys that have an expiration set, while allkeys policies consider every key regardless of expiration.

To set an eviction policy, for example allkeys-lru:

config set maxmemory-policy allkeys-lru
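The volatile/allkeys split above only determines which keys are eligible for eviction; the ranking (random, LRU, LFU, or TTL) happens afterwards. A minimal Python model of that candidate-selection step (a simplification, not Redis source):

```python
def eviction_candidates(keys_with_ttl, policy):
    """Pick the candidate key set for a given eviction policy.

    keys_with_ttl: dict mapping key -> ttl in seconds (None = no expiration).
    Only candidate selection is modeled; the actual ranking
    (LRU/LFU/TTL/random) is applied to this set inside Redis.
    """
    if policy.startswith("allkeys"):
        return set(keys_with_ttl)          # every key is eligible
    if policy.startswith("volatile"):
        return {k for k, ttl in keys_with_ttl.items() if ttl is not None}
    return set()                           # noeviction: nothing is evicted

keys = {"a": 60, "b": None, "c": 10}
print(sorted(eviction_candidates(keys, "volatile-lru")))   # ['a', 'c']
print(sorted(eviction_candidates(keys, "allkeys-lfu")))    # ['a', 'b', 'c']
```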

LRU algorithm

LRU (Least Recently Used) evicts the least recently accessed data, keeping frequently used items. Redis implements LRU by recording a timestamp for each key and, when eviction is needed, sampling a set of keys and removing the one with the smallest timestamp.

Key implementation details:

Each key stores the last access time in the lru field of its internal object.

On the first eviction, Redis randomly selects N keys as a candidate set (N is configurable via config set maxmemory-samples; the default is 5).

Subsequent evictions add keys whose LRU value is lower than the smallest value in the current candidate set.

When the candidate set reaches the configured sample size, the key with the smallest LRU is evicted.
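The sampling behaviour described above can be sketched in plain Python. This is a simplified model, not the Redis implementation: it samples once per eviction, whereas the real server also keeps a persistent candidate pool across evictions.

```python
import random

def evict_one_approx_lru(last_access, samples=5, rng=random):
    """Approximate-LRU eviction: sample `samples` keys at random and
    evict the one with the oldest (smallest) last-access timestamp.

    last_access: dict mapping key -> last access time (e.g. a counter).
    `samples` mirrors the idea of Redis's maxmemory-samples setting.
    """
    candidates = rng.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=last_access.get)
    del last_access[victim]
    return victim

clock = {"k1": 100, "k2": 5, "k3": 80, "k4": 60}
# With samples >= number of keys, every key is sampled, so the true
# least-recently-used key ("k2") is always evicted.
print(evict_one_approx_lru(clock, samples=4))  # k2
```

Larger sample sizes approximate true LRU more closely at the cost of more CPU per eviction, which is exactly the trade-off maxmemory-samples controls.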

LFU algorithm

LFU (Least Frequently Used), introduced in Redis 4.0, evicts the keys with the lowest access frequency. It splits the original 24-bit lru field into a 16-bit timestamp (ldt) and an 8-bit access counter. When evicting, Redis first removes keys with the smallest counter; if counters are equal, the key with the older timestamp is evicted.

LFU is useful for workloads where many keys are accessed only once; such keys would have low counters and can be removed even if they were accessed recently, addressing a limitation of pure LRU.
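The counter-plus-timestamp ordering described above can be modeled in a few lines of Python. This sketch ignores Redis's probabilistic counter increments and decay (lfu-log-factor / lfu-decay-time) and just applies the comparison rule:

```python
def evict_one_lfu(meta):
    """LFU eviction sketch: evict the key with the smallest access counter,
    breaking ties by the older access timestamp (smaller ldt).

    meta: dict mapping key -> (counter, ldt), a simplified model of the
    split 24-bit field described above.
    """
    victim = min(meta, key=lambda k: (meta[k][0], meta[k][1]))
    del meta[victim]
    return victim

meta = {"hot": (200, 50), "cold": (1, 90), "once": (1, 10)}
# "once" and "cold" share the lowest counter; "once" has the older
# timestamp, so it is evicted first.
print(evict_one_lfu(meta))  # once
```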

Reference: "Redis Core Technology and Practice"

Tags: Memory Management, Redis, LRU, Databases, LFU, Eviction Policy
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
