Redis Memory Limits, Eviction Policies, and LRU/LFU Implementation
This article explains how to configure Redis's maximum memory usage, describes the various eviction strategies including noeviction, allkeys‑lru, volatile‑lru, random and ttl policies, shows how to query and set these policies via configuration files or commands, and provides Java code for a simple LRU cache while discussing Redis's approximate LRU and LFU algorithms.
Redis Memory Size
Redis is an in‑memory key‑value database, and because system memory is limited you can configure the maximum amount of memory Redis is allowed to use.
1. Configure via configuration file
Add the following line to the redis.conf file located in the Redis installation directory (or to any configuration file you specify when starting Redis):
// Set Redis maximum memory to 100M
maxmemory 100mb
The configuration file used by Redis does not have to be the one under the installation directory; you can pass a different file path as a startup argument.
2. Configure via command
Redis also supports changing the memory limit at runtime with commands.
// Set Redis maximum memory to 100M
127.0.0.1:6379> config set maxmemory 100mb
// Retrieve the current maximum memory setting
127.0.0.1:6379> config get maxmemory
If you do not set a maximum memory size, or set it to 0, Redis has no memory limit on 64-bit systems; on 32-bit systems it uses at most 3 GB.
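The maxmemory directive accepts the same human-readable units in the configuration file and in CONFIG SET: per the comments in redis.conf, 1k means 1000 bytes, 1kb means 1024 bytes, and likewise for m/mb and g/gb, case-insensitively. As a rough sketch of how such a value maps to bytes (the class and method names here are our own, not part of Redis):

```java
public class MaxMemoryParser {
    // Parse a Redis-style memory size such as "100mb" into bytes.
    // Per redis.conf: 1k = 1000 bytes, 1kb = 1024 bytes, and the same
    // pattern for m/mb and g/gb; units are case-insensitive.
    public static long parseMemory(String s) {
        String v = s.trim().toLowerCase();
        long unit = 1;
        if (v.endsWith("kb"))      { unit = 1024L;               v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("mb")) { unit = 1024L * 1024;        v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("gb")) { unit = 1024L * 1024 * 1024; v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("k"))  { unit = 1_000L;              v = v.substring(0, v.length() - 1); }
        else if (v.endsWith("m"))  { unit = 1_000_000L;          v = v.substring(0, v.length() - 1); }
        else if (v.endsWith("g"))  { unit = 1_000_000_000L;      v = v.substring(0, v.length() - 1); }
        return Long.parseLong(v.trim()) * unit;
    }

    public static void main(String[] args) {
        System.out.println(parseMemory("100mb")); // 104857600 bytes
    }
}
```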
Redis Memory Eviction
When the configured memory limit is reached, Redis must decide how to handle additional writes.
Redis defines several eviction strategies:
noeviction (default): write requests are rejected with an error (except DEL and some special commands).
allkeys‑lru: evicts keys using the LRU algorithm across all keys.
volatile‑lru: evicts keys with an expiration time using LRU.
allkeys‑random: evicts random keys from the entire dataset.
volatile‑random: evicts random keys that have an expiration time.
volatile‑ttl: evicts keys with an expiration time based on the smallest remaining TTL.
When using volatile‑lru, volatile‑random, or volatile‑ttl, if no eligible key exists Redis behaves like noeviction and returns an error.
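To make the volatile-ttl policy and its fallback behavior concrete, here is a small simulation (the class and method names are ours, not a Redis API): it picks the expiring key with the smallest remaining TTL, and when no key carries an expiration it behaves like noeviction, which we model with an exception.

```java
import java.util.HashMap;
import java.util.Map;

public class VolatileTtlEviction {
    // remaining TTL per key in milliseconds; null means "no expiration set"
    private final Map<String, Long> ttls = new HashMap<>();

    public void set(String key, Long ttlMillis) {
        ttls.put(key, ttlMillis);
    }

    // Returns the key volatile-ttl would evict: the expiring key with the
    // smallest remaining TTL. With no eligible key, Redis falls back to
    // noeviction behavior, modeled here by throwing an exception.
    public String pickVictim() {
        String victim = null;
        long best = Long.MAX_VALUE;
        for (Map.Entry<String, Long> e : ttls.entrySet()) {
            Long ttl = e.getValue();
            if (ttl != null && ttl < best) {
                best = ttl;
                victim = e.getKey();
            }
        }
        if (victim == null) {
            throw new IllegalStateException("no volatile keys: behaving like noeviction");
        }
        return victim;
    }
}
```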
How to get and set the eviction policy
Get the current eviction policy:
127.0.0.1:6379> config get maxmemory-policy
Set the eviction policy via the configuration file (modify redis.conf):
maxmemory-policy allkeys-lru
Or set it at runtime with a command:
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
LRU Algorithm
What is LRU?
LRU (Least Recently Used) is a cache replacement algorithm. When a fixed‑size cache is full, the least recently accessed items are evicted to make room for new data.
The core idea is that data not used for a while is unlikely to be needed soon, so it can be safely removed.
Below is a simple Java implementation of an LRU cache:
import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // maximum number of entries the cache can hold
    private final int capacity;
    // current number of entries
    private int count;
    // map from key to its node in the linked list
    private final Map<K, Node<K, V>> nodeMap;
    // sentinel head and tail of the doubly linked list
    private final Node<K, V> head;
    private final Node<K, V> tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // sentinel nodes simplify insertion and removal at both ends
        this.head = new Node<>(null, null);
        this.tail = new Node<>(null, null);
        head.next = tail;
        tail.pre = head;
    }

    public void put(K key, V value) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // evict the least recently used node (the one before the tail)
                removeNode();
            }
            node = new Node<>(key, value);
            addNode(node);
        } else {
            // update the value and mark the node as most recently used
            node.value = value;
            moveNodeToHead(node);
        }
    }

    public V get(K key) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            return null;
        }
        moveNodeToHead(node);
        return node.value;
    }

    private void removeNode() {
        Node<K, V> node = tail.pre;
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node<K, V> node) {
        Node<K, V> pre = node.pre;
        Node<K, V> next = node.next;
        pre.next = next;
        next.pre = pre;
        node.next = null;
        node.pre = null;
    }

    private void addNode(Node<K, V> node) {
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node<K, V> node) {
        Node<K, V> next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    private void moveNodeToHead(Node<K, V> node) {
        removeFromList(node);
        addToHead(node);
    }

    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> pre;
        Node<K, V> next;

        Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}
The code implements a basic LRU cache with O(1) get and put operations using a hash map and a doubly linked list; get returns null for missing keys, and put updates the value of an existing key while marking it as most recently used.
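For comparison, the Java standard library already offers the same behavior: a LinkedHashMap constructed with accessOrder = true keeps its entries in access order, so an LRU cache reduces to overriding removeEldestEntry. A minimal sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashLRU<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LinkedHashLRU(int capacity) {
        // accessOrder = true: iteration order runs from least- to
        // most-recently accessed entry
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the eldest (least recently used) entry once over capacity
        return size() > capacity;
    }
}
```

Both put and get count as accesses, so the eldest entry is always the least recently used one.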
LRU Implementation in Redis
Approximate LRU Algorithm
Redis uses an approximate LRU algorithm. Instead of a strict LRU, it randomly samples a few keys (default 5) and evicts the least recently used among the sampled set.
You can change the sample size with the maxmemory-samples parameter, e.g.:
maxmemory-samples 10
A larger sample size makes the eviction result closer to a true LRU.
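The sampling idea can be sketched as follows (the class and field names are ours; real Redis operates on its internal dict and the per-object LRU clock): draw maxmemory-samples random keys and evict the one with the oldest last-access time among them.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class SampledLru {
    // last-access time per key, in arbitrary clock ticks
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Random random = new Random();

    public void touch(String key, long clock) {
        lastAccess.put(key, clock);
    }

    // Approximate LRU: sample up to `samples` random keys and return the
    // one with the smallest (oldest) last-access time among the sample.
    public String pickVictim(int samples) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            String key = keys.get(random.nextInt(keys.size()));
            long t = lastAccess.get(key);
            if (t < oldest) {
                oldest = t;
                victim = key;
            }
        }
        return victim;
    }
}
```

With a larger sample the chance of missing the globally oldest key shrinks, which is exactly why raising maxmemory-samples approaches strict LRU.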
Redis stores a 24-bit last-access timestamp (the LRU clock) in each object's header to track when the key was last used.
Redis 3.0 Approximate LRU Optimizations
Redis 3.0 introduces a candidate pool (size 16). A sampled key enters the pool if the pool is not yet full, or if its last-access time is smaller than that of the most recently accessed candidate already in the pool; when the pool overflows, the candidate with the largest timestamp (the most recently used one) is dropped. Eviction then simply takes the key with the smallest timestamp from the pool.
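The pool mechanism can be sketched like this (a simplification of what Redis does when populating its eviction pool; the names are ours, and we assume distinct timestamps for brevity): each round of sampling can only improve the pool, and the final victim is the pool entry with the smallest last-access time.

```java
import java.util.TreeMap;

public class EvictionPool {
    private static final int POOL_SIZE = 16;
    // candidates keyed by last-access time, smallest (oldest) first;
    // assumes distinct timestamps for simplicity
    private final TreeMap<Long, String> pool = new TreeMap<>();

    // Offer a sampled key: accept it if the pool has room, or if it is
    // older than the newest candidate currently in the pool.
    public void offer(String key, long lastAccess) {
        if (pool.size() < POOL_SIZE) {
            pool.put(lastAccess, key);
        } else if (lastAccess < pool.lastKey()) {
            pool.pollLastEntry(); // drop the most recently used candidate
            pool.put(lastAccess, key);
        }
    }

    // The eviction victim is the oldest candidate in the pool.
    public String victim() {
        return pool.isEmpty() ? null : pool.firstEntry().getValue();
    }
}
```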
LRU Algorithm Comparison
An experiment can compare strict LRU with Redis's approximate LRU by filling the memory, then adding additional data to trigger eviction. A chart in the original article (source: segmentfault.com) visualizes the results for different sample sizes and Redis versions.
In the chart, light gray points represent evicted data, gray points represent old data that survived, and green points represent newly added data.
The results show that a sample size of 10 in Redis 3.0 produces behavior closest to a strict LRU, and even with the same sample size Redis 3.0 outperforms Redis 2.8.
LFU Algorithm
LFU (Least Frequently Used) is a new eviction policy introduced in Redis 4.0. It evicts keys that are accessed the least often, providing a better notion of data “hotness”.
LFU has two variants:
volatile‑lfu – applies LFU only to keys with an expiration time.
allkeys‑lfu – applies LFU to all keys.
These policies can be set the same way as the LRU policies, but they require Redis 4.0 or newer; attempting to set them on older versions results in an error.
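Redis's LFU does not keep an exact hit count: each object stores an 8-bit counter that is incremented probabilistically, so higher counts become progressively harder to reach (tuned by lfu-log-factor, default 10). A sketch of that increment rule, following the formula in the Redis documentation (LFU_INIT_VAL is the counter value new keys start with; the class itself is our own illustration):

```java
import java.util.Random;

public class LfuCounter {
    private static final int LFU_INIT_VAL = 5;   // counter value for new keys
    private final int logFactor;                 // lfu-log-factor, default 10
    private final Random random;

    public LfuCounter(int logFactor, long seed) {
        this.logFactor = logFactor;
        this.random = new Random(seed);
    }

    // Logarithmic increment from the Redis docs: the probability of
    // bumping the 8-bit counter shrinks as the counter grows.
    public int increment(int counter) {
        if (counter >= 255) {
            return 255; // 8-bit counter saturates
        }
        double baseval = Math.max(0, counter - LFU_INIT_VAL);
        double p = 1.0 / (baseval * logFactor + 1);
        if (random.nextDouble() < p) {
            counter++;
        }
        return counter;
    }
}
```

While the counter is at or below LFU_INIT_VAL the probability is 1, so cold keys are counted exactly; only hot keys pay the probabilistic discount.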
Question
Why does Redis use an approximate LRU algorithm instead of a precise LRU? Feel free to discuss your answer in the comments.
References
https://redis.io/topics/lru-cache
https://segmentfault.com/a/1190000016743562
https://segmentfault.com/a/1190000017555834
Source: juejin.im/post/5d674ac2e51d4557ca7fdd70