
Understanding Redis Memory Limits and Eviction Policies (LRU, LFU)

This article explains how to configure Redis's maximum memory usage, describes the various eviction strategies such as noeviction, allkeys-lru, and volatile-lru, and introduces both the LRU and LFU algorithms with Java examples and practical configuration commands.

Architecture Digest

Redis Memory Usage

Redis is an in‑memory key‑value store; because physical memory is limited, you can configure the maximum memory Redis may use.

1. Configure via redis.conf

Add the following line to the configuration file:

# Set Redis's maximum memory to 100 MB
maxmemory 100mb
The configuration file used at startup can be specified with a command‑line argument; it is not necessarily the redis.conf under the installation directory.

2. Change at runtime with CONFIG command

Redis allows you to modify the limit while it is running:

# Set Redis's maximum memory to 100 MB
127.0.0.1:6379> config set maxmemory 100mb
# Read back the configured maximum memory
127.0.0.1:6379> config get maxmemory
If maxmemory is 0 or not set, Redis on 64‑bit systems has no limit; on 32‑bit systems the limit is about 3 GB.

Redis Eviction Policies

When the configured memory limit is reached, Redis can evict keys according to several strategies:

noeviction – write commands return an error (except DEL and a few other special commands); reads continue to work.

allkeys-lru – LRU eviction among all keys.

volatile-lru – LRU eviction among keys with an expiration set.

allkeys-random – random eviction among all keys.

volatile-random – random eviction among keys with an expiration set.

volatile-ttl – among keys with an expiration set, evict those with the shortest remaining TTL first.

When using volatile-lru, volatile-random, or volatile-ttl, if no key with an expiration exists, the command behaves as under noeviction.
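As a quick illustration of noeviction: once the limit is reached, any command that would allocate more memory fails (a redis-cli sketch; the exact error text may vary slightly between versions):

```
127.0.0.1:6379> config set maxmemory-policy noeviction
OK
127.0.0.1:6379> set foo bar
(error) OOM command not allowed when used memory > 'maxmemory'.
```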

Getting and setting the eviction policy

Current policy:

127.0.0.1:6379> config get maxmemory-policy

Set via configuration file:

maxmemory-policy allkeys-lru

Set at runtime:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru

LRU Algorithm

What is LRU?

LRU (Least Recently Used) evicts the least recently accessed items when the cache is full.

Simple Java implementation

import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // maximum number of entries the cache may hold
    private int capacity;
    // current number of entries
    private int count;
    // maps keys to their list nodes for O(1) lookup
    private Map<K, Node<K, V>> nodeMap;
    private Node<K, V> head;
    private Node<K, V> tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // Sentinel head and tail nodes spare us null checks at the list ends
        Node<K, V> headNode = new Node<>(null, null);
        Node<K, V> tailNode = new Node<>(null, null);
        headNode.next = tailNode;
        tailNode.pre = headNode;
        this.head = headNode;
        this.tail = tailNode;
    }

    public void put(K key, V value) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // evict the least recently used node first
                removeNode();
            }
            node = new Node<>(key, value);
            // insert the new node at the head
            addNode(node);
        } else {
            // update the value and mark the node as most recently used
            node.value = value;
            moveNodeToHead(node);
        }
    }

    public Node<K, V> get(K key) {
        Node<K, V> node = nodeMap.get(key);
        if (node != null) {
            moveNodeToHead(node);
        }
        return node;
    }

    private void removeNode() {
        // the node just before the tail sentinel is the least recently used
        Node<K, V> node = tail.pre;
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node<K, V> node) {
        Node<K, V> pre = node.pre;
        Node<K, V> next = node.next;
        pre.next = next;
        next.pre = pre;
        node.next = null;
        node.pre = null;
    }

    private void addNode(Node<K, V> node) {
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node<K, V> node) {
        Node<K, V> next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    private void moveNodeToHead(Node<K, V> node) {
        // unlink, then re-insert right after the head sentinel
        removeFromList(node);
        addToHead(node);
    }

    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> pre;
        Node<K, V> next;

        Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}
The code implements a basic LRU cache with a hash map plus a doubly-linked list: the map gives O(1) lookup, and the list keeps access order so the least recently used node always sits next to the tail sentinel.
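For comparison, the same LRU behavior is available from the Java standard library: a LinkedHashMap constructed in access order evicts the eldest entry when removeEldestEntry says so. A short sketch (the capacity of 2 and the key names are just for the demo):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    public static void main(String[] args) {
        final int capacity = 2; // demo capacity
        // third constructor argument `true` = iterate in access order
        Map<String, Integer> cache = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > capacity; // evict when over capacity
            }
        };
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a": "b" is now least recently used
        cache.put("c", 3); // exceeds capacity, so "b" is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Hand-rolling the list, as above, is still worthwhile for understanding what the standard class does internally.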

LRU in Redis

Approximate LRU

Redis uses an approximate LRU algorithm based on random sampling: it samples a few keys (5 by default) and evicts the least recently used among them. The sample size can be changed with maxmemory-samples; raising it (e.g., maxmemory-samples 10) brings eviction closer to true LRU at the cost of extra CPU.
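The sampling idea can be sketched in a few lines of Java. This is an illustration of the concept, not Redis's actual implementation: draw a handful of random keys and evict the one idle longest, instead of scanning every key as a strict LRU would.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ApproxLruDemo {
    public static void main(String[] args) {
        // Last-access timestamps per key; a smaller value means idle longer.
        Map<String, Long> lastAccess = new HashMap<>();
        for (int i = 0; i < 100; i++) lastAccess.put("key" + i, (long) i);

        int samples = 5; // plays the role of maxmemory-samples
        Random rnd = new Random();
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String victim = null;
        for (int i = 0; i < samples; i++) {
            // pick a random key; keep it if it is idler than the current victim
            String k = keys.get(rnd.nextInt(keys.size()));
            if (victim == null || lastAccess.get(k) < lastAccess.get(victim)) {
                victim = k;
            }
        }
        System.out.println("evicting " + victim);
    }
}
```

The trade-off is clear from the loop: the work per eviction is O(samples) regardless of how many keys the server holds.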

Redis 3.0 improvements

Redis 3.0 improves on this with a candidate pool of 16 entries kept sorted by idle time: newly sampled keys enter the pool only if they are idler than an existing candidate, which makes eviction noticeably more accurate than in earlier versions.

Comparing LRU variants

An experiment shows that increasing the sample size (e.g., to 10) makes Redis 3.0’s behavior close to a strict LRU, and that Redis 3.0 outperforms Redis 2.8 even with the same sample count.

LFU Algorithm

Since Redis 4.0, LFU (Least Frequently Used) is available. It evicts keys that are accessed the least often. Two policies exist:

volatile-lfu – LFU among keys with an expiration.

allkeys-lfu – LFU among all keys.

These policies require Redis 4.0 or newer; using them on older versions will result in an error.
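Rather than keeping an exact hit count per key, Redis tracks frequency with a small probabilistic logarithmic counter. The sketch below mirrors the idea behind LFULogIncr in Redis's evict.c (class and method names here are illustrative): the larger the counter already is, the less likely another access is to increment it, so an 8-bit counter can represent enormous hit counts.

```java
import java.util.Random;

public class LfuCounterDemo {
    static final int LFU_INIT_VAL = 5; // initial counter value for new keys
    static final Random RND = new Random();

    // Probabilistic log counter: increment with probability
    // 1 / ((counter - LFU_INIT_VAL) * logFactor + 1).
    static int logIncr(int counter, int logFactor) {
        if (counter >= 255) return 255; // saturating 8-bit counter
        double baseval = Math.max(counter - LFU_INIT_VAL, 0);
        double p = 1.0 / (baseval * logFactor + 1);
        if (RND.nextDouble() < p) counter++;
        return counter;
    }

    public static void main(String[] args) {
        int counter = LFU_INIT_VAL;
        for (int i = 0; i < 1_000_000; i++) counter = logIncr(counter, 10);
        // A million accesses fit in a counter that never exceeds 255.
        System.out.println("counter after 1,000,000 hits: " + counter);
    }
}
```

In Redis the growth rate is tuned with lfu-log-factor, and a companion decay (lfu-decay-time) slowly lowers counters over time so that stale hot keys can eventually be evicted.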

Question

Why does Redis use an approximate LRU instead of a strict one? Consider what maintaining an exact, globally ordered access list over every key would cost in memory and CPU, and share your thoughts in the comments.

Written by Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
