
Redis Memory Limits, Configuration, and Eviction Policies (LRU & LFU)

This article shows how to cap Redis's memory usage via the configuration file or runtime commands, surveys the built-in eviction policies (noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl), and explains how to query and change them. It then covers LRU fundamentals with a Java example, Redis's approximate LRU implementation and its Redis 3.0 refinements, and the LFU eviction algorithm introduced in Redis 4.0.

Java Captain

Redis is an in-memory key-value store; because system RAM is finite, Redis lets you cap how much memory it uses by configuring a maximum memory size.

Configuring Maximum Memory

1. Via the configuration file: add maxmemory 100mb to redis.conf. The configuration file used at startup can be passed to redis-server as a command-line argument.

2. Via runtime command: execute CONFIG SET maxmemory 100mb to change the limit on the fly, and CONFIG GET maxmemory to read the current setting. If maxmemory is 0 or unset, Redis imposes no limit on 64-bit systems and an implicit 3 GB limit on 32-bit systems.
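The persistent variant looks like this in redis.conf (the 100mb value is illustrative):

```conf
# redis.conf — cap Redis's memory usage at 100 MB (illustrative value)
maxmemory 100mb
```

Note that a limit set with CONFIG SET takes effect immediately but is lost on restart unless you persist it back to the file with CONFIG REWRITE.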

Memory Eviction Policies

When the configured memory is exhausted, Redis applies one of several eviction strategies:

noeviction (default): once the limit is reached, commands that would consume more memory (most writes) return an error; reads are still served.

allkeys-lru: evicts the least recently used key among all keys.

volatile-lru: evicts the least recently used key among keys that have an expiration set.

allkeys-random: evicts a random key among all keys.

volatile-random: evicts a random key among keys that have an expiration set.

volatile-ttl: among keys with an expiration set, evicts those with the nearest expiration time first.

When using volatile-lru, volatile-random, or volatile-ttl, if no eligible key exists the behavior is the same as noeviction.

Getting and Setting the Eviction Policy

Retrieve the current policy with CONFIG GET maxmemory-policy . Set it in redis.conf using maxmemory-policy allkeys-lru or change it at runtime with CONFIG SET maxmemory-policy allkeys-lru .

LRU Algorithm

LRU (Least Recently Used) evicts data that has not been accessed for the longest time, assuming it is unlikely to be needed soon.

Java Example of a Simple LRU Cache

public class LRUCache<K, V> {
    // capacity, node map, doubly-linked list implementation
    // ... (full Java code from the source) ...
}

The code implements a basic LRU cache with a doubly‑linked list and a hash map.
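Since the full listing is elided above, here is a minimal self-contained version of the same design (class and method names are illustrative): a HashMap gives O(1) lookup, and a doubly-linked list keeps entries in recency order, with the most recently used entry at the head and the eviction victim at the tail.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal LRU cache: HashMap for O(1) lookup, doubly-linked list for recency order.
public class LRUCache<K, V> {
    private static class Node<K, V> {
        K key; V value;
        Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // sentinel
    private final Node<K, V> tail = new Node<>(null, null); // sentinel

    public LRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null;
        moveToHead(node);                 // touching a key makes it most recently used
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {               // update in place and refresh recency
            node.value = value;
            moveToHead(node);
            return;
        }
        if (map.size() >= capacity) {     // evict the least recently used (tail side)
            Node<K, V> lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        node = new Node<>(key, value);
        map.put(key, node);
        linkAtHead(node);
    }

    private void moveToHead(Node<K, V> node) { unlink(node); linkAtHead(node); }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void linkAtHead(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}
```

This is the classic strict-LRU structure: every access moves a node, so recency order is exact but each key pays for two extra pointers and every read mutates the list.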

Approximate LRU in Redis

Redis uses an approximate LRU algorithm: it randomly samples a configurable number of keys (default 5) and evicts the least recently used among the sample. The sample size can be changed with maxmemory-samples .

Each key stores a 24‑bit timestamp of its last access to support this approximation.
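The sampling idea can be sketched as follows (class and field names are illustrative; the real implementation lives in Redis's evict.c and works on its internal object headers):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch of approximate LRU: sample a few random keys and evict the one with
// the oldest last-access time, instead of maintaining a full global LRU order.
public class ApproxLruSampler {
    private final Map<String, Long> lastAccess = new HashMap<>(); // key -> last-access clock
    private final Random random = new Random();

    public void touch(String key, long clock) { lastAccess.put(key, clock); }

    // Draw `samples` random keys (Redis defaults to 5, tunable via
    // maxmemory-samples) and return the least recently used among them.
    public String pickVictim(int samples) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            String candidate = keys.get(random.nextInt(keys.size()));
            long t = lastAccess.get(candidate);
            if (t < oldest) { oldest = t; victim = candidate; }
        }
        return victim;
    }
}
```

The trade-off is clear from the sketch: no per-key pointers and no list maintenance on reads, at the cost of sometimes evicting a key that is merely old within the sample rather than globally oldest.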

Redis 3.0 Optimizations

Redis 3.0 introduces an eviction candidate pool (size 16) that keeps sampled keys sorted by idle time. A newly sampled key enters the pool only if the pool is not yet full or it has been idle longer than the least idle key already pooled; evictions then take the best candidate from the pool. Because good candidates accumulate across sampling rounds, this improves eviction accuracy over per-round sampling alone.
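The pool mechanism can be sketched like this (a simplification of the evictionPoolEntry logic in Redis's evict.c; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Redis 3.0 eviction pool: a small buffer (16 entries in Redis)
// of candidate keys kept sorted by idle time, ascending. A sampled key enters
// only if there is room or it is idler than the least idle pooled candidate,
// so the best victims seen across sampling rounds accumulate here.
public class EvictionPool {
    public static final int POOL_SIZE = 16;

    static class Entry {
        final String key;
        final long idle; // time since last access; larger = better eviction victim
        Entry(String key, long idle) { this.key = key; this.idle = idle; }
    }

    private final List<Entry> pool = new ArrayList<>(); // sorted by idle, ascending

    public void offer(String key, long idle) {
        if (pool.size() == POOL_SIZE && idle <= pool.get(0).idle) {
            return;         // not idler than the freshest pooled candidate; skip
        }
        if (pool.size() == POOL_SIZE) {
            pool.remove(0); // drop the least idle candidate to make room
        }
        int i = 0;
        while (i < pool.size() && pool.get(i).idle < idle) i++;
        pool.add(i, new Entry(key, idle));
    }

    // Evict from the high-idle end of the pool.
    public String takeVictim() {
        if (pool.isEmpty()) return null;
        return pool.remove(pool.size() - 1).key;
    }
}
```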

LFU Algorithm (Redis 4.0+)

LFU (Least Frequently Used) evicts keys based on access frequency rather than recency. It offers two policies:

volatile-lfu: applies to keys with an expiration.

allkeys-lfu: applies to all keys.

LFU better captures key "hotness": under LRU, a rarely accessed key that happens to be touched just before an eviction pass can survive as if it were hot, while a genuinely frequently accessed key is evicted. LFU avoids this by ranking keys on how often they are accessed, not merely how recently.
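Rather than a full access count, Redis stores a small 8-bit frequency counter per key that grows logarithmically: each access increments it only with a probability that shrinks as the counter rises. A sketch of that probabilistic increment (modeled on LFULogIncr in Redis's evict.c; the constants mirror Redis's defaults):

```java
import java.util.Random;

// Sketch of Redis's logarithmic LFU counter. The counter fits in 8 bits
// (0..255); an access increments it with probability
// 1 / (baseval * LOG_FACTOR + 1), so high counts need exponentially more
// accesses to grow further.
public class LfuCounter {
    static final int INIT_VAL = 5;    // mirrors Redis's LFU_INIT_VAL for new keys
    static final int LOG_FACTOR = 10; // mirrors the lfu-log-factor default

    private static final Random RANDOM = new Random();

    public static int logIncr(int counter) {
        if (counter >= 255) return 255;                 // saturated
        double baseval = Math.max(0, counter - INIT_VAL);
        double p = 1.0 / (baseval * LOG_FACTOR + 1);    // shrinks as counter grows
        return RANDOM.nextDouble() < p ? counter + 1 : counter;
    }
}
```

Redis also decays these counters over time (controlled by lfu-decay-time), so keys that were hot in the past gradually cool off instead of staying uneavictable forever.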

Conclusion

A question worth pondering: why does Redis implement an approximate LRU rather than a strict one? Consider the memory and CPU cost of maintaining an exact global recency order (extra pointers per key, list updates on every access) against sampling a handful of keys per eviction.

Java · Memory Management · Database · Redis · LRU · LFU · Eviction Policy
Written by

Java Captain

Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.
