Mastering Redis Memory Eviction: Which Policy Fits Your Use Case?
This guide explains Redis's eight memory eviction policies: how each algorithm works, which scenarios it suits, and how to configure it via redis.conf, CONFIG SET, or command-line options, so you can choose a strategy that balances performance, data safety, and resource limits.
Redis uses the maxmemory-policy setting to decide which keys to evict when the configured maxmemory limit is reached. The server supports eight distinct eviction policies, each with its own algorithm and use case.
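A minimal redis.conf sketch of how the two settings work together (the 256mb limit and the policy value are illustrative examples, not recommendations):

```
# Eviction kicks in once used memory exceeds this limit (0 = no limit).
maxmemory 256mb
# Strategy applied when the limit is hit; noeviction is the default.
maxmemory-policy noeviction
```

The same pair can be changed at runtime with CONFIG SET maxmemory 256mb and CONFIG SET maxmemory-policy noeviction.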
noeviction
The default policy. When memory reaches the limit, Redis rejects new write commands and returns an OOM error without evicting any keys.
Suitable scenarios
Data must never be lost (critical data).
Plenty of memory or a small dataset.
Redis used as a reliable message queue.
Cache warm‑up phase where data should stay until the cache stabilises.
How to configure
In redis.conf add maxmemory-policy noeviction.
At runtime: CONFIG SET maxmemory-policy noeviction.
Command‑line: start Redis with --maxmemory-policy noeviction.
Key points
Redis records memory usage but never evicts keys.
Writes that would increase memory are rejected with an error like
(error) OOM command not allowed when used memory > 'maxmemory'.
Only explicit deletions or expirations free memory, so careful monitoring is required.
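The rejection behaviour can be sketched as a toy Python model (NoEvictionStore and its character-count cost model are hypothetical illustrations of the policy, not Redis internals):

```python
class NoEvictionStore:
    """Toy model of noeviction: writes that would exceed maxmemory
    are rejected instead of evicting existing keys."""

    def __init__(self, maxmemory: int):
        self.maxmemory = maxmemory
        self.data: dict[str, str] = {}

    def used_memory(self) -> int:
        # Crude cost model: one "byte" per character of key and value.
        return sum(len(k) + len(v) for k, v in self.data.items())

    def set(self, key: str, value: str) -> bool:
        projected = self.used_memory() - len(self.data.get(key, "")) + len(value)
        if key not in self.data:
            projected += len(key)
        if projected > self.maxmemory:
            # Mirrors: (error) OOM command not allowed when used memory > 'maxmemory'
            return False
        self.data[key] = value
        return True
```

In real deployments this is why noeviction demands active monitoring: once the limit is hit, only DEL, expirations, or a raised maxmemory restore write availability.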
allkeys-lru
Evicts the least‑recently‑used keys among all keys, even those without an expiration.
Typical use cases
Cache workloads that need to keep recently accessed data.
Uniform access patterns where no clear hot keys exist.
Environments with strict memory caps.
How to configure
In redis.conf: maxmemory-policy allkeys-lru.
Runtime: CONFIG SET maxmemory-policy allkeys-lru.
Command‑line: --maxmemory-policy allkeys-lru.
Details
Redis implements an approximate LRU by sampling random keys.
The maxmemory-samples parameter controls sample size, affecting eviction accuracy and CPU usage.
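The sampling idea can be sketched in Python (evict_one_lru, the plain dicts, and the samples parameter standing in for maxmemory-samples are illustrative assumptions, not Redis code):

```python
import random

def evict_one_lru(store: dict, last_access: dict, samples: int = 5) -> str:
    """Approximate LRU in the spirit of Redis: draw a small random
    sample of keys and evict the least-recently-used key in it.
    `samples` plays the role of Redis's maxmemory-samples setting."""
    candidates = random.sample(list(store), min(samples, len(store)))
    victim = min(candidates, key=lambda k: last_access[k])
    del store[victim]
    del last_access[victim]
    return victim
```

A larger sample approaches true LRU at the cost of more CPU per eviction, which is exactly the trade-off maxmemory-samples exposes.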
allkeys-lfu
Evicts the least‑frequently‑used keys among all keys, regardless of expiration.
Typical use cases
Non‑uniform access where some keys are hot.
Workloads that must preserve frequently accessed data.
Scenarios where data importance correlates with access frequency.
How to configure
In redis.conf: maxmemory-policy allkeys-lfu.
Runtime: CONFIG SET maxmemory-policy allkeys-lfu.
Command‑line: --maxmemory-policy allkeys-lfu.
Details
LFU is also approximate; Redis tracks a small logarithmic access counter per key.
Counter growth can be tuned with lfu-log-factor, and counter decay with lfu-decay-time.
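A rough Python sketch of the counter logic, modelled on the behaviour Redis documents (the constants mirror Redis defaults, but lfu_incr and lfu_decay are simplified illustrations, not the real implementation):

```python
import random

LFU_INIT_VAL = 5        # new keys start here, as in Redis
LFU_LOG_FACTOR = 10     # plays the role of lfu-log-factor
LFU_DECAY_MINUTES = 1   # plays the role of lfu-decay-time

def lfu_incr(counter: int) -> int:
    """Probabilistic logarithmic increment: the higher the counter,
    the less likely an access is to raise it (capped at 255)."""
    if counter >= 255:
        return counter
    base = max(counter - LFU_INIT_VAL, 0)
    p = 1.0 / (base * LFU_LOG_FACTOR + 1)
    return counter + 1 if random.random() < p else counter

def lfu_decay(counter: int, idle_minutes: float) -> int:
    """Decrement the counter once per elapsed lfu-decay-time period,
    so keys that were hot long ago gradually become evictable."""
    periods = int(idle_minutes // LFU_DECAY_MINUTES)
    return max(counter - periods, 0)
```

The logarithmic growth keeps an 8-bit counter meaningful across millions of accesses; the decay prevents historically hot but now idle keys from being pinned in memory forever.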
volatile-lru
Applies LRU eviction only to keys that have an explicit expiration.
Typical use cases
Caching data with TTL.
Temporary session or computation results.
Protecting non‑expiring hot data from accidental eviction.
How to configure
In redis.conf: maxmemory-policy volatile-lru.
Runtime: CONFIG SET maxmemory-policy volatile-lru.
Command‑line: --maxmemory-policy volatile-lru.
Key points
Only keys with an expiration are considered for eviction.
Uses the same approximate LRU algorithm as allkeys-lru.
Monitor memory and eviction events to avoid unexpected data loss.
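The candidate filtering can be sketched as a toy Python helper (function and dict names are hypothetical): only keys present in the ttls map are eligible for eviction.

```python
import random

def evict_one_volatile_lru(store: dict, last_access: dict, ttls: dict,
                           samples: int = 5):
    """volatile-lru sketch: sample only keys that carry an expiration
    and evict the least-recently-used one; non-expiring keys are safe."""
    expiring = [k for k in store if k in ttls]
    if not expiring:
        return None  # nothing evictable: Redis then errors like noeviction
    candidates = random.sample(expiring, min(samples, len(expiring)))
    victim = min(candidates, key=lambda k: last_access[k])
    for d in (store, last_access, ttls):
        d.pop(victim, None)
    return victim
```

Note the edge case in the sketch: if no key has an expiration, a volatile-* policy has nothing to evict and Redis rejects writes just as noeviction would.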
volatile-lfu
Evicts the least‑frequently‑used keys among those that have an expiration.
Typical use cases
Caching where some expiring keys are accessed rarely.
Balancing hot temporary data against rarely used temporary data.
How to configure
In redis.conf: maxmemory-policy volatile-lfu.
Runtime: CONFIG SET maxmemory-policy volatile-lfu.
Command‑line: --maxmemory-policy volatile-lfu.
Details
Eviction compares LFU counters only among keys that have an expiration; keys without a TTL are never evicted.
Decay behaviour can be adjusted with lfu-decay-time.
volatile-random
Randomly evicts keys that have an expiration when memory is full.
Typical use cases
When eviction logic should be simple and uniform random removal is acceptable.
Temporary data where occasional loss is tolerable.
How to configure
In redis.conf: maxmemory-policy volatile-random.
Runtime: CONFIG SET maxmemory-policy volatile-random.
Command‑line: --maxmemory-policy volatile-random.
Key points
Random eviction is fast and requires no tracking of usage statistics.
All expiring keys have an equal chance of being removed.
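Because no usage statistics are consulted, the whole policy reduces to a uniform random pick over expiring keys, as this toy Python sketch shows (the helper and dict names are hypothetical):

```python
import random

def evict_one_volatile_random(store: dict, ttls: dict):
    """volatile-random sketch: evict any key with an expiration,
    chosen uniformly at random; no LRU/LFU bookkeeping needed."""
    expiring = [k for k in store if k in ttls]
    if not expiring:
        return None  # no expiring keys means nothing can be evicted
    victim = random.choice(expiring)
    store.pop(victim)
    ttls.pop(victim)
    return victim
```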
allkeys-random
Randomly evicts any key, regardless of expiration.
Typical use cases
Caches containing non‑critical data where simplicity is preferred.
Load‑balanced clusters needing a uniform eviction distribution.
Emergency situations where a quick memory release is needed.
How to configure
In redis.conf: maxmemory-policy allkeys-random.
Runtime: CONFIG SET maxmemory-policy allkeys-random.
Command‑line: --maxmemory-policy allkeys-random.
Key points
Random eviction is simple, fast, and treats all keys equally.
May discard important data, so it should be used only for non‑essential caches.
volatile-ttl
Evicts keys with the shortest remaining Time‑To‑Live among those that have an expiration.
Typical use cases
Temporary data where the most imminent expirations should be cleared first.
Real‑time feeds or news where freshness is critical.
Memory‑constrained environments that need deterministic TTL‑based eviction.
How to configure
In redis.conf: maxmemory-policy volatile-ttl.
Runtime: CONFIG SET maxmemory-policy volatile-ttl.
Command‑line: --maxmemory-policy volatile-ttl.
Details
Redis repeatedly evicts expiring keys with the smallest remaining TTL until memory usage falls back below the limit.
Like the LRU and LFU policies, this is approximate: Redis samples expiring keys (the sample size is controlled by maxmemory-samples) and evicts the one closest to expiring within each sample.
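The selection rule can be sketched in Python (evict_one_volatile_ttl and the expire_at map of absolute expiration times are illustrative assumptions, not Redis internals):

```python
import random

def evict_one_volatile_ttl(store: dict, expire_at: dict,
                           samples: int = 5, now: float = 0.0):
    """volatile-ttl sketch: among a sample of expiring keys, evict the
    one with the smallest remaining TTL (i.e. nearest expiration)."""
    expiring = [k for k in store if k in expire_at]
    if not expiring:
        return None
    candidates = random.sample(expiring, min(samples, len(expiring)))
    victim = min(candidates, key=lambda k: expire_at[k] - now)
    store.pop(victim)
    expire_at.pop(victim)
    return victim
```

This makes eviction order roughly track expiration order, which is why the policy suits data whose value disappears at a known deadline.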
Monitoring and cautions
All eviction policies should be monitored via INFO memory and MEMORY STATS to ensure expected behaviour.
Choose a policy that matches your data criticality, access pattern, and performance requirements.
When using policies that may discard data, implement fallback mechanisms or backups if data loss is unacceptable.
Java Architecture Stack
Dedicated to original, practical tech insights—from skill advancement to architecture, front‑end to back‑end, the full‑stack path, with Wei Ge guiding you.