
Understanding Redis Memory Eviction Strategies

This article explains how Redis handles memory pressure using configurable maxmemory limits and a variety of eviction policies—including noeviction, volatile‑lru, volatile‑lfu, allkeys‑lru, and allkeys‑random—while offering guidance on selecting appropriate policies and sizing cache capacity for optimal performance.


When memory usage grows beyond the configured limit faster than expired keys can be reclaimed lazily, Redis resorts to its memory eviction mechanism to free space.

Configuring Memory Limits

The total memory that Redis may use is controlled by the maxmemory directive, which can be applied at runtime with CONFIG SET maxmemory 4gb or persisted in redis.conf. On 64-bit systems a value of 0 means no limit, while on 32-bit systems the implicit ceiling is 3 GB.
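In redis.conf, the memory limit is typically set alongside an eviction policy via the maxmemory-policy directive, for example:

```conf
# redis.conf — cap memory usage at 4 GB and choose an eviction policy
maxmemory 4gb
maxmemory-policy allkeys-lru
```

The same values can be applied to a running server with CONFIG SET maxmemory 4gb and CONFIG SET maxmemory-policy allkeys-lru, and inspected with CONFIG GET maxmemory.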

Eviction Policies Overview

Redis provides eight policies, grouped into two broad categories: one policy that never evicts (noeviction) and seven policies that evict keys according to different criteria.

noeviction (No Eviction)

When memory usage exceeds maxmemory, Redis returns an error for commands that would allocate more memory (most write commands), effectively refusing new data; read commands continue to be served.
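This behavior can be illustrated with a toy store (a sketch, not Redis's implementation) that refuses writes once a byte budget is exhausted; the error message mirrors the OOM error Redis itself returns:

```python
class TinyStore:
    """Toy key-value store with a byte budget and a noeviction-like policy."""

    def __init__(self, maxmemory: int):
        self.maxmemory = maxmemory
        self.data: dict[str, bytes] = {}

    def used(self) -> int:
        return sum(len(v) for v in self.data.values())

    def set(self, key: str, value: bytes) -> None:
        # Under "noeviction", a write that would push usage past the
        # limit is refused instead of evicting another key.
        delta = len(value) - len(self.data.get(key, b""))
        if self.used() + delta > self.maxmemory:
            raise MemoryError(
                "OOM command not allowed when used memory > 'maxmemory'"
            )
        self.data[key] = value


store = TinyStore(maxmemory=10)
store.set("a", b"12345")
store.set("b", b"12345")
# store.set("c", b"x")  # would raise MemoryError, like Redis's OOM error
```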

Volatile (Key‑Specific) Policies

volatile-lru : evicts the least‑recently‑used keys that have an expiration time.

volatile-lfu : (added in Redis 4.0) evicts keys with the lowest access frequency among those with an expiration.

volatile-random : randomly evicts an expiring key.

volatile-ttl : evicts the expiring key with the shortest remaining time to live.

Allkeys (Global) Policies

allkeys-lru : evicts the least‑recently‑used key regardless of expiration.

allkeys-lfu : evicts the least‑frequently‑used key globally.

allkeys-random : randomly evicts any key.
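The key distinction between the two groups is the candidate pool: volatile-* policies only consider keys that carry an expiration, while allkeys-* policies consider every key. A small Python sketch (toy data, not Redis's actual bookkeeping) makes the difference concrete:

```python
# Toy key space: name -> (last_access_time, access_count, expires_at or None)
keys = {
    "session:1": (100.0, 3, 200.0),  # has a TTL: candidate for volatile-*
    "session:2": (150.0, 1, 300.0),  # has a TTL
    "config":    (50.0, 9, None),    # no TTL: only evictable under allkeys-*
}

def candidates(policy: str) -> list[str]:
    """Return the key names a given policy scope may evict."""
    if policy.startswith("volatile"):
        return [k for k, (_, _, exp) in keys.items() if exp is not None]
    return list(keys)  # allkeys-* considers every key

def pick_victim(policy: str) -> str:
    """Pick an eviction victim: LRU = oldest access, LFU = lowest count."""
    pool = candidates(policy)
    if policy.endswith("lru"):
        return min(pool, key=lambda k: keys[k][0])
    if policy.endswith("lfu"):
        return min(pool, key=lambda k: keys[k][1])
    raise ValueError("policy not covered by this sketch")

print(pick_victim("volatile-lru"))  # "session:1": oldest access among TTL keys
print(pick_victim("allkeys-lru"))   # "config": oldest access overall
```

Note how the same LRU criterion picks different victims depending on scope: allkeys-lru is free to evict the never-expiring "config" key, while volatile-lru must leave it alone.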

When to Use Specific Policies

allkeys-lru is ideal when your workload exhibits clear hot‑cold data patterns, keeping frequently accessed items in memory.

allkeys-random works well when data access is uniformly distributed and no clear hot set exists.

volatile-lru suits scenarios where some keys must never be evicted (e.g., pinned content) while expiring keys can be removed based on recent usage.

Choosing Cache Size

Cache size should balance cost and benefit; a common rule of thumb is to allocate roughly 15–30% of the total dataset, reflecting the 80/20 principle but adjusted for real-world access patterns.
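Applying that rule of thumb is simple arithmetic; the helper below is a hypothetical sketch (the 200 GB dataset size is an invented example, not a recommendation):

```python
def suggested_cache_range(dataset_gb: float,
                          low: float = 0.15,
                          high: float = 0.30) -> tuple[float, float]:
    """Suggest a maxmemory range as 15%-30% of the total dataset size."""
    return dataset_gb * low, dataset_gb * high

low_gb, high_gb = suggested_cache_range(200)  # hypothetical 200 GB dataset
print(f"maxmemory between {low_gb:.0f} GB and {high_gb:.0f} GB")
```

The result should be treated as a starting point and tuned against the observed hit rate, since skewed access patterns may justify a smaller or larger cache.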

LRU vs LFU

volatile-lru uses an approximation of the classic Least-Recently-Used algorithm (Redis samples a small set of keys per eviction rather than maintaining a full LRU list), whereas volatile-lfu tracks access frequency with a decaying counter, removing the least-accessed keys.
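The sampling idea behind Redis's approximated LRU can be sketched in a few lines of Python (a toy model, not the real implementation; in Redis the sample size is controlled by the maxmemory-samples directive):

```python
import random

random.seed(0)  # deterministic for the illustration

# Toy idle-time table: key:99 has been idle the longest
idle_time = {f"key:{i}": i for i in range(100)}

def approx_lru_victim(samples: int = 5) -> str:
    """Sample a few keys and evict the most idle one in the sample,
    instead of tracking a global LRU ordering of all keys."""
    pool = random.sample(list(idle_time), samples)
    return max(pool, key=lambda k: idle_time[k])

victim = approx_lru_victim()
print(victim)  # the most idle key within the random sample
```

Larger sample sizes approach exact LRU at higher CPU cost; sampling every key would always find key:99, the globally most idle one.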

Further details on the exact algorithms will be covered in a future article.

Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
