
Understanding Redis Memory Limits and Eviction Policies (LRU, LFU)

This article explains how to cap Redis's memory usage with maxmemory, walks through the eviction policies (noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl, and the LFU variants), shows both the configuration-file and runtime methods, and closes with a Java implementation of an LRU cache.

Top Architect

Redis is an in‑memory key‑value database, and because system memory is finite you can configure the maximum amount of memory Redis is allowed to use.

Setting Redis Maximum Memory

1. Via configuration file

Add the following line to redis.conf (or any configuration file you start Redis with):

maxmemory 100mb

If you start Redis with a custom configuration file, pass its path as the first argument when launching the server, e.g. redis-server /path/to/redis.conf.

2. Via runtime command

Redis allows you to change the memory limit while it is running:

127.0.0.1:6379> config set maxmemory 100mb

Changes made with CONFIG SET take effect immediately but are not written back to the configuration file; run CONFIG REWRITE if you want them to survive a restart.

To query the current setting:

127.0.0.1:6379> config get maxmemory

If the limit is not set or is set to 0, Redis will use all available memory on a 64-bit OS, or up to ~3 GB on a 32-bit OS.
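Redis accepts human-readable size suffixes for maxmemory: suffixes without a trailing "b" (1k, 1m, 1g) are powers of 1000, while 1kb, 1mb, and 1gb are powers of 1024. A minimal Java sketch of that conversion, assuming only these documented suffixes:

```java
import java.util.Locale;

public class MaxMemoryParser {
    // Converts a Redis-style size string such as "100mb" to bytes.
    // Suffixes without 'b' use powers of 1000; with 'b', powers of 1024.
    public static long toBytes(String value) {
        String v = value.trim().toLowerCase(Locale.ROOT);
        long unit = 1;
        if (v.endsWith("kb"))      { unit = 1024L;                 v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("mb")) { unit = 1024L * 1024;          v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("gb")) { unit = 1024L * 1024 * 1024;   v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("k"))  { unit = 1000L;                 v = v.substring(0, v.length() - 1); }
        else if (v.endsWith("m"))  { unit = 1000L * 1000;          v = v.substring(0, v.length() - 1); }
        else if (v.endsWith("g"))  { unit = 1000L * 1000 * 1000;   v = v.substring(0, v.length() - 1); }
        return Long.parseLong(v.trim()) * unit;
    }
}
```

So maxmemory 100mb corresponds to 104,857,600 bytes, not 100,000,000.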

Redis Memory Eviction Policies

When the configured memory limit is reached, Redis can evict keys according to one of several policies:

noeviction – write commands return an error (except DEL and a few special commands).

allkeys-lru – LRU eviction among all keys.

volatile-lru – LRU eviction among keys with an expiration time.

allkeys-random – random eviction among all keys.

volatile-random – random eviction among expiring keys.

volatile-ttl – evict keys with the shortest remaining TTL first.

When using volatile-lru, volatile-random, or volatile-ttl, if no eligible key exists Redis behaves like noeviction.
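To make the volatile-* restriction concrete, here is a hypothetical sketch of victim selection under volatile-ttl: only keys that have a TTL are eligible, and the one with the shortest remaining TTL wins; when no key has a TTL, there is no victim, mirroring the noeviction fallback. The names and data model are illustrative, not Redis internals.

```java
import java.util.Map;
import java.util.Optional;

public class VolatileTtlPicker {
    // ttlMillis: remaining TTL per key; a negative value means "no expiration set".
    // Returns the key with the shortest remaining TTL, or empty if none is eligible.
    public static Optional<String> pickVictim(Map<String, Long> ttlMillis) {
        return ttlMillis.entrySet().stream()
                .filter(e -> e.getValue() >= 0)     // only expiring keys are eligible
                .min(Map.Entry.comparingByValue())  // shortest remaining TTL first
                .map(Map.Entry::getKey);
    }
}
```

An empty result is exactly the situation where Redis, under a volatile-* policy, refuses the write as if noeviction were configured.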

Getting and Setting the Eviction Policy

Query the current policy:

127.0.0.1:6379> config get maxmemory-policy

Set the policy via configuration file:

maxmemory-policy allkeys-lru

Or set it at runtime:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru

LRU Algorithm

The Least Recently Used (LRU) algorithm evicts the data that has not been accessed for the longest time. Redis implements an approximate LRU using random sampling.

Approximate LRU in Redis

Redis samples a configurable number of keys (default 5) and evicts the least recently used among the sampled set. The sample size can be changed with maxmemory-samples :

maxmemory-samples 10

Increasing the sample size makes the eviction behavior closer to a true LRU. Redis stores a 24‑bit timestamp with each key to track its last access.
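The sampling idea can be sketched as follows: instead of scanning every key, pick up to maxmemory-samples random keys and evict the least recently used one within that sample. This is an illustrative simulation with invented names, not Redis's actual C implementation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class SampledLruEvictor {
    // lastAccess: per-key last-access timestamps (higher = more recent).
    // Samples up to sampleSize random keys and returns the least recently
    // used key within the sample.
    public static String pickVictim(Map<String, Long> lastAccess, int sampleSize, Random rnd) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        Collections.shuffle(keys, rnd);
        List<String> sample = keys.subList(0, Math.min(sampleSize, keys.size()));
        String victim = null;
        for (String k : sample) {
            if (victim == null || lastAccess.get(k) < lastAccess.get(victim)) {
                victim = k;
            }
        }
        return victim;
    }
}
```

With a small sample the chosen key may not be the globally oldest one; as sampleSize approaches the number of keys, the result converges on true LRU, which is exactly the trade-off maxmemory-samples controls.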

Redis 3.0 Optimizations

Redis 3.0 introduces a candidate pool (size 16) that keeps keys sorted by last access time, improving the accuracy of the approximate LRU.

LFU Algorithm

Starting with Redis 4.0, the Least Frequently Used (LFU) eviction policy is available. LFU evicts the least frequently accessed keys, which better reflects how "hot" a key actually is.

Two LFU policies exist:

volatile-lfu – LFU among expiring keys.

allkeys-lfu – LFU among all keys.

These policies can only be set on Redis 4.0 or newer.
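A minimal way to picture LFU is a per-key access counter: every read increments the counter, and eviction removes the key with the smallest count. Real Redis uses a probabilistic 8-bit counter with configurable decay rather than exact counts, so this sketch is a deliberate simplification:

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleLfu {
    private final Map<String, Long> counts = new HashMap<>();

    // Record one access to a key.
    public void touch(String key) {
        counts.merge(key, 1L, Long::sum);
    }

    // Return the least frequently used key, or null if the cache is empty.
    public String pickVictim() {
        return counts.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}
```

Unlike LRU, a key read many times long ago still ranks as "hot" here; Redis's decaying counter is what prevents such stale keys from lingering forever.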

Java Example of a Simple LRU Cache

import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // maximum number of entries
    private final int capacity;
    // current number of entries
    private int count;
    // map of keys to list nodes for O(1) lookup
    private final Map<K, Node<K, V>> nodeMap;
    // sentinel nodes: head.next is the most recently used entry,
    // tail.pre is the least recently used entry
    private final Node<K, V> head;
    private final Node<K, V> tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        this.head = new Node<>(null, null);
        this.tail = new Node<>(null, null);
        head.next = tail;
        tail.pre = head;
    }

    public void put(K key, V value) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // evict the least recently used node
                removeNode();
            }
            node = new Node<>(key, value);
            addNode(node);
        } else {
            // update the value and move the node to the head (most recent)
            node.value = value;
            moveNodeToHead(node);
        }
    }

    public V get(K key) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            return null;
        }
        moveNodeToHead(node);
        return node.value;
    }

    private void removeNode() {
        Node<K, V> node = tail.pre;
        // detach from list and forget the key
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node<K, V> node) {
        Node<K, V> pre = node.pre;
        Node<K, V> next = node.next;
        pre.next = next;
        next.pre = pre;
        node.next = null;
        node.pre = null;
    }

    private void addNode(Node<K, V> node) {
        // new entries start at the head
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node<K, V> node) {
        Node<K, V> next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    private void moveNodeToHead(Node<K, V> node) {
        // detach and re-attach at the head
        removeFromList(node);
        addToHead(node);
    }

    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> pre;
        Node<K, V> next;

        Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}

This Java implementation demonstrates the core operations of an LRU cache: insertion, lookup, eviction of the least‑recently‑used entry, and internal doubly‑linked‑list management.

Discussion Prompt

Why does Redis use an approximate LRU algorithm instead of a precise LRU? Share your thoughts in the comments.

Tags: Java, Memory Management, Redis, Configuration, LRU, LFU, Eviction Policy
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, plus architecture adjustments using internet technologies. We welcome idea-driven, sharing-oriented architects to exchange and learn together.