Understanding Redis Memory Limits and Eviction Policies
This article explains how to configure Redis's maximum memory usage and describes the built-in eviction strategies, including the noeviction, allkeys-lru, volatile-lru, random, and TTL-based policies. It shows how to query and set these policies through the configuration file or at runtime, and details the LRU and LFU algorithms Redis uses, including a Java sample implementation and the improvements introduced in Redis 3.0 and 4.0.
Redis is an in-memory key-value database, so its memory usage should be capped to keep it from exhausting system memory. The maximum can be set in the redis.conf file with the directive maxmemory 100mb, or dynamically at runtime with the CONFIG SET maxmemory 100mb command. If no limit is set, Redis on a 64-bit OS can use all available memory, while on a 32-bit OS it is implicitly capped at about 3 GB.
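As a sketch, the relevant redis.conf directives might look like the following (the values are illustrative, not recommendations):

```
# Cap the memory Redis may use for data
maxmemory 100mb
# What to do when the cap is reached (see the policies below)
maxmemory-policy allkeys-lru
```

The same settings can be applied to a running instance with CONFIG SET, which avoids a restart but does not persist across restarts unless CONFIG REWRITE is used.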
When the configured memory limit is reached, Redis applies one of several eviction policies:
noeviction – write commands that would grow memory are rejected with an error; reads still succeed.
allkeys-lru – the least recently used key among all keys is evicted.
volatile-lru – LRU eviction is applied only to keys with an expiration time.
allkeys-random – a random key is evicted.
volatile-random – a random key with an expiration time is evicted.
volatile-ttl – among keys with an expiration time, those closest to expiring are evicted first.
The current eviction policy can be retrieved with CONFIG GET maxmemory-policy and changed either in redis.conf (e.g., maxmemory-policy allkeys-lru) or at runtime with CONFIG SET maxmemory-policy allkeys-lru.
Redis implements an approximate LRU algorithm: rather than tracking a fully ordered access list, it samples a configurable number of keys (five by default) and evicts the least recently used key within that sample. The sample size can be adjusted with maxmemory-samples. Redis 3.0 improved accuracy by maintaining a pool of 16 candidate keys, evicting the key with the smallest last-access timestamp when memory must be freed.
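The sampling idea can be illustrated with a short sketch (this is not Redis source code; the class and method names are invented, each key carries a logical last-access clock, and the 3.0 candidate pool is omitted for brevity):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch of approximate LRU eviction by random sampling, in the spirit
// of Redis's maxmemory-samples: pick a few random keys and evict the
// one with the oldest access time, instead of tracking exact order.
public class ApproxLruSketch {
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Random random = new Random();
    private long clock = 0;          // logical access clock
    private final int sampleSize;    // analogous to maxmemory-samples

    public ApproxLruSketch(int sampleSize) { this.sampleSize = sampleSize; }

    // Record an access: the key's timestamp becomes the newest.
    public void touch(String key) { lastAccess.put(key, ++clock); }

    // Examine up to sampleSize random keys and evict the least
    // recently used among them; returns the evicted key.
    public String evictOne() {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        Collections.shuffle(keys, random);        // sample without replacement
        int n = Math.min(sampleSize, keys.size());
        String victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < n; i++) {
            long t = lastAccess.get(keys.get(i));
            if (t < oldest) { oldest = t; victim = keys.get(i); }
        }
        if (victim != null) lastAccess.remove(victim);
        return victim;
    }
}
```

With a small sample the evicted key is only probably the globally oldest one; raising the sample size trades CPU for accuracy, which is exactly the trade-off maxmemory-samples exposes.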
Since Redis 4.0, Least Frequently Used (LFU) eviction policies are available (volatile-lfu and allkeys-lfu). LFU evicts the keys that are accessed least often, which is usually a better measure of key "hotness" than recency alone.
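Redis does not keep exact access counts; it stores a small probabilistic counter per key that grows logarithmically with accesses. A hedged sketch of that counter update (the class name is invented; the constants mirror Redis's documented LFU behavior, where new keys start at 5 and lfu-log-factor dampens growth):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the probabilistic logarithmic counter behind Redis 4.0 LFU.
// The counter fits in 8 bits; the larger it gets, the less likely an
// access is to increment it, so it grows roughly with log(accesses).
public class LfuCounterSketch {
    static final int LFU_INIT_VAL = 5;   // Redis initializes new keys here
    static final int MAX_COUNTER = 255;  // 8-bit saturation
    final int logFactor;                 // analogous to lfu-log-factor

    LfuCounterSketch(int logFactor) { this.logFactor = logFactor; }

    // Return the (possibly incremented) counter after one access.
    int increment(int counter) {
        if (counter >= MAX_COUNTER) return MAX_COUNTER;   // saturate
        double r = ThreadLocalRandom.current().nextDouble();
        int base = Math.max(0, counter - LFU_INIT_VAL);
        double p = 1.0 / (base * logFactor + 1);          // shrinking odds
        return r < p ? counter + 1 : counter;
    }
}
```

Because the increment probability is 1 while the counter is at or below its initial value, young keys build up a baseline quickly, while very hot keys saturate slowly; Redis also decays the counter over time so stale keys become evictable, a detail omitted here.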
A simple Java implementation of an LRU cache illustrates the algorithm: a doubly linked list combined with a hash map gives O(1) get and put operations, with helper methods to add, remove, and move nodes to the head of the list.
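A minimal version of that structure might look like the following (a generic sketch of the hash-map-plus-linked-list design the article describes, not the article's exact listing):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal LRU cache: a hash map for O(1) lookup plus a doubly linked
// list ordered from most recently used (head) to least (tail).
public class LruCache<K, V> {
    private static final class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // sentinel, MRU side
    private final Node<K, V> tail = new Node<>(null, null); // sentinel, LRU side

    public LruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null;
        moveToHead(node);            // accessed: now most recently used
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {          // update in place
            node.value = value;
            moveToHead(node);
            return;
        }
        if (map.size() == capacity) { // full: evict the LRU node at the tail
            Node<K, V> lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        node = new Node<>(key, value);
        map.put(key, node);
        addAfterHead(node);
    }

    private void moveToHead(Node<K, V> node) { unlink(node); addAfterHead(node); }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void addAfterHead(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}
```

The sentinel head and tail nodes remove all null checks from the list surgery, which is the main source of bugs in hand-rolled LRU implementations.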
Overall, the article guides readers on configuring Redis memory limits, choosing appropriate eviction strategies, and understanding the underlying LRU/LFU mechanisms.