Understanding Redis Memory Management: maxmemory Configuration and Eviction Policies
This article explains how Redis uses memory as a cache, details the maxmemory setting and its configuration methods, describes various eviction policies—including LRU, LFU, and random strategies—and outlines expiration handling, replication considerations, and best‑practice recommendations for stable high‑load deployments.
Redis is commonly used as a cache to accelerate read access to slower servers or databases. Because cache entries are copies of persistent data, they can be safely evicted when memory is exhausted and later re‑cached if needed.
The maxmemory configuration option controls the maximum amount of memory a Redis instance may use, helping to prevent the process from consuming excessive RAM and degrading system performance.
There are two ways to set maxmemory:
1. Edit the redis.conf file and add a line such as maxmemory 100mb.
2. Issue CONFIG SET maxmemory 100mb at runtime from a Redis client.
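Concretely, the two methods look like this (the 100 MB value is just an example):

```conf
# redis.conf — cap this instance at 100 MB
maxmemory 100mb
```

```shell
# At runtime, from any client:
redis-cli CONFIG SET maxmemory 100mb
# Optionally persist the runtime change back into redis.conf:
redis-cli CONFIG REWRITE
```

Note that a value changed with CONFIG SET is lost on restart unless it is written back to the configuration file with CONFIG REWRITE.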
By default, maxmemory is 0 (no limit) on 64-bit systems, while 32-bit systems get an implicit limit of 3 GB because of their small address space. Setting it to 0 disables memory limiting, which may be convenient for testing but is risky in production.
When memory usage reaches the maxmemory limit, Redis applies the policy defined by maxmemory-policy. Common policies include:
noeviction: Returns an error on write commands that would allocate more memory; reads are still served. This is the default.
volatile-lru: Evicts the least-recently-used keys among those with an expiration set.
allkeys-lru: Evicts the least-recently-used keys regardless of expiration.
volatile-lfu: Evicts the least-frequently-used keys among those with an expiration set.
allkeys-lfu: Evicts the least-frequently-used keys across the whole keyspace.
volatile-random: Randomly evicts keys that have an expiration set.
allkeys-random: Randomly evicts any key.
volatile-ttl: Among keys with an expiration set, evicts those with the shortest remaining time-to-live.
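The policy is set the same way as maxmemory, for instance:

```conf
# redis.conf — evict the approximately least-recently-used key
# from the whole keyspace when maxmemory is reached
maxmemory-policy allkeys-lru
```

It can also be changed at runtime with CONFIG SET maxmemory-policy allkeys-lru, which makes it easy to experiment with different policies against a live workload.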
Guidelines for choosing a policy: use allkeys-lru when a small subset of keys is accessed far more frequently than the rest; allkeys-random when access frequencies are roughly uniform; volatile-ttl when you can assign short TTLs to good eviction candidates; and consider running separate Redis instances for caching and persistent storage if you would otherwise rely on volatile-lru or volatile-random.
The LRU (Least Recently Used) algorithm approximates true LRU by sampling a configurable number of keys (maxmemory-samples, default 5) and evicting the least recently accessed key among them. Larger sample sizes approximate true LRU more closely but cost more CPU.
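As a rough illustration, the sampling approach can be sketched in Python. This is a simplified model: real Redis stores a compact 24-bit access clock per object rather than full timestamps, and the key names below are made up.

```python
import random
import time

def evict_sampled_lru(last_access, samples=5):
    """Pick an eviction victim the way Redis approximates LRU:
    sample a few random keys and evict the one accessed longest ago.
    `last_access` maps key -> last-access timestamp."""
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    return min(candidates, key=lambda k: last_access[k])

# Toy usage: key "a" was touched long ago, the others just now.
now = time.time()
access = {"a": now - 100, "b": now - 1, "c": now - 2, "d": now - 3, "e": now - 4}
victim = evict_sampled_lru(access, samples=5)
# When the sample covers every key, the true LRU key "a" is always chosen;
# with a smaller sample the result is only probably the coldest key.
```

With samples smaller than the keyspace, each call inspects only a handful of keys, which is exactly the accuracy-versus-CPU trade-off described above.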
The LFU (Least Frequently Used) algorithm tracks how often each key is accessed, which gives a more stable eviction signal than LRU: a cold key that happens to have been touched once recently cannot displace a genuinely hot key.
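The counter behind LFU is only 8 bits and is bumped probabilistically, so it grows roughly logarithmically with access frequency. A sketch of that increment, modeled on Redis's LFULogIncr (the constants match Redis's documented defaults for LFU_INIT_VAL and lfu-log-factor):

```python
import random

LFU_INIT_VAL = 5  # new keys start mid-range so they aren't evicted immediately

def lfu_incr(counter, lfu_log_factor=10):
    """Probabilistic increment of an 8-bit LFU counter: the larger the
    counter already is, the lower the chance an access bumps it, so
    growth is roughly logarithmic in the number of accesses."""
    if counter >= 255:
        return 255  # saturate: the counter is only 8 bits wide
    base = max(counter - LFU_INIT_VAL, 0)
    if random.random() < 1.0 / (base * lfu_log_factor + 1):
        counter += 1
    return counter
```

Redis additionally decays these counters over time (lfu-decay-time) so that keys that were hot long ago do not stay pinned in memory forever.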
Redis also removes expired keys through two mechanisms: lazy deletion (a key is deleted when it is accessed after its TTL has passed) and active periodic deletion (by default about ten times per second, Redis samples a small random subset of keys that carry a TTL and deletes those that have expired).
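A toy model of the two mechanisms side by side. The class name and the 20-key sample size are illustrative; Redis's real cycle is adaptive and repeats while a large fraction of sampled keys turn out to be expired.

```python
import random
import time

class ExpiringDict:
    """Toy model of Redis expiration: lazy deletion on access plus an
    active cycle that samples random keys carrying a TTL."""
    def __init__(self):
        self.data = {}       # key -> value
        self.expire_at = {}  # key -> absolute expiry time

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expire_at[key] = time.monotonic() + ttl

    def get(self, key):
        # Lazy deletion: drop the key if it is accessed past its TTL.
        exp = self.expire_at.get(key)
        if exp is not None and time.monotonic() >= exp:
            del self.data[key], self.expire_at[key]
            return None
        return self.data.get(key)

    def active_expire_cycle(self, sample_size=20):
        # Periodic deletion: test a random sample of keys that have a TTL.
        now = time.monotonic()
        keys = list(self.expire_at)
        for key in random.sample(keys, min(sample_size, len(keys))):
            if now >= self.expire_at[key]:
                del self.data[key], self.expire_at[key]
```

The combination matters: lazy deletion alone would never reclaim keys that are expired but no longer accessed, while the active cycle alone could let a stale value be read in the window between expiry and the next scan.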
In replication and persistence scenarios, Redis needs additional RAM for replica output buffers and the AOF buffer; this buffer memory is not counted against maxmemory, so leave headroom for it when sizing an instance. With replication, replicas never expire keys on their own; instead, the master sends an explicit DEL to its replicas when a key expires, which keeps the replicas consistent.
By properly configuring maxmemory and selecting an appropriate eviction policy, you can effectively manage Redis memory usage and ensure stable operation under high load and large data volumes.
Cognitive Technology Team