
Understanding Redis Memory Limits and Eviction Policies

This article explains how to configure Redis's maximum memory, surveys the available eviction policies (noeviction plus the allkeys and volatile variants of LRU, random, and TTL-based eviction), shows how to query and set the policy, and details the approximate LRU and LFU algorithms Redis uses for cache management.


Redis Memory Size

Redis is an in‑memory key‑value store; you can limit its maximum memory via configuration.

1. Configure via redis.conf

Add maxmemory 100mb to the redis.conf file.

Note that the redis.conf Redis actually loads may not be the one in the installation directory; the configuration file path can be supplied as an argument when starting the server.

2. Change at runtime

Redis supports modifying memory size with commands while it is running.

# set the Redis max memory to 100 MB
config set maxmemory 100mb

# read back the configured max memory
config get maxmemory

If maxmemory is not set, or is set to 0, memory is unlimited on a 64-bit OS, while on a 32-bit OS there is an implicit limit of about 3 GB.

Redis Memory Eviction Policies

When memory is exhausted Redis can evict keys according to several policies:

noeviction (default): new write requests return an error (DEL and a few other special commands are still allowed); reads are unaffected.

allkeys‑lru: LRU eviction among all keys.

volatile‑lru: LRU eviction among keys with an expiration time.

allkeys‑random: random eviction among all keys.

volatile‑random: random eviction among expiring keys.

volatile‑ttl: evicts keys with the nearest expiration time.

When using volatile‑lru, volatile‑random, or volatile‑ttl, if no evictable key exists the behavior is the same as noeviction (error returned).
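The policy table above can be sketched as a small pure-Python victim selector. This is an illustrative model, not Redis code: pick_victim and the metadata layout are hypothetical, and the real server tracks access times and expirations internally.

```python
import random

def pick_victim(keys, policy):
    """Pick a key to evict under the given maxmemory-policy.
    `keys` maps key -> metadata dict with 'last_access' and an
    optional 'expires_at'. Returns None when nothing is evictable,
    which Redis surfaces as the same error noeviction produces."""
    if policy == "noeviction":
        return None
    if policy.startswith("volatile"):
        # volatile-* policies only consider keys that carry an expiry
        pool = {k: m for k, m in keys.items() if "expires_at" in m}
    else:
        pool = dict(keys)
    if not pool:
        return None
    if policy.endswith("random"):
        return random.choice(list(pool))
    if policy.endswith("ttl"):
        # evict the key closest to expiring
        return min(pool, key=lambda k: pool[k]["expires_at"])
    # *-lru: evict the least recently accessed key
    return min(pool, key=lambda k: pool[k]["last_access"])
```

Note how the volatile-* branch returning None when no key has an expiry mirrors the fallback-to-noeviction behavior described above.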

How to Get and Set the Eviction Policy

Get current policy:

config get maxmemory-policy

Set via configuration file (modify redis.conf):

maxmemory-policy allkeys-lru

Set at runtime:

config set maxmemory-policy allkeys-lru

LRU Algorithm

What is LRU?

LRU (Least Recently Used) is a cache replacement algorithm that evicts the least recently accessed items when the cache is full.
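A strict LRU cache can be written in a few lines of Python using an ordered map; this textbook sketch is for contrast with the approximate scheme Redis actually uses, described below.

```python
from collections import OrderedDict

class LRUCache:
    """Strict LRU cache: every get/put moves the key to the most
    recently used position; eviction removes the oldest entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

The cost of this exactness is the bookkeeping: every single access must update the ordering structure, which is precisely what Redis avoids.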

LRU in Redis

Approximate LRU

Redis uses an approximate LRU algorithm: it randomly samples a few keys (default 5) and evicts the least recently used among the sample.

The number of samples can be changed with the maxmemory-samples parameter; larger values make eviction closer to true LRU.

Each key stores an extra 24‑bit field that records the last access time.
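The sampling step can be modelled in a few lines. This is a simplified sketch (real Redis compares the 24-bit clock fields and samples from its internal dict), but it shows why a larger maxmemory-samples value approaches true LRU: sampling every key is exactly strict LRU.

```python
import random

def approx_lru_evict(last_access, samples=5):
    """Approximate LRU: sample `samples` random keys and evict the
    least recently accessed one among the sample. `last_access` maps
    key -> last access timestamp (larger = more recent)."""
    candidates = random.sample(list(last_access),
                               min(samples, len(last_access)))
    return min(candidates, key=lambda k: last_access[k])
```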

Redis 3.0 Optimisation

Redis 3.0 introduces a candidate pool of size 16. Sampled keys are inserted into the pool, which is kept ordered by access time; when the pool is full, a newly sampled key can only enter by displacing the candidate with the most recent access time. Eviction then removes the pool entry with the oldest access time.
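A rough Python model of the pool mechanism follows. It is a sketch under simplifying assumptions: it tracks idle time (seconds since last access, so larger means older) rather than raw access clocks, and evict_with_pool is a hypothetical helper, not the actual evictionPoolPopulate logic.

```python
import random

POOL_SIZE = 16  # Redis's eviction pool also holds 16 candidates

def evict_with_pool(idle_times, pool, samples=5):
    """Pool-based eviction sketch. `idle_times` maps key -> seconds
    since last access; `pool` (key -> idle time) persists across
    calls, so good candidates found earlier are not thrown away."""
    for key in random.sample(list(idle_times),
                             min(samples, len(idle_times))):
        if key in pool:
            continue
        pool[key] = idle_times[key]
        if len(pool) > POOL_SIZE:
            # displace the most recently accessed candidate
            freshest = min(pool, key=pool.get)
            del pool[freshest]
    # evict the candidate that has been idle the longest
    victim = max(pool, key=pool.get)
    del pool[victim]
    return victim
```

Because the pool carries over between evictions, each round effectively considers more than one sample's worth of keys, which is why Redis 3.0 tracks true LRU more closely than 2.8 at the same sample size.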

LRU Comparison

An experiment comparing strict LRU with Redis's approximate LRU shows that increasing the sample count (e.g., to 10) makes the behavior much closer to true LRU, and Redis 3.0 outperforms Redis 2.8 at the same sample size.

(The original article includes a comparison chart, not reproduced here, in which light gray marks evicted data, gray marks old data that remains, and green marks newly added data.)

LFU Algorithm

LFU (Least Frequently Used) was added in Redis 4.0. It evicts keys that are accessed the least often.

Two LFU policies are available:

volatile‑lfu: LFU eviction among keys with an expiration time.

allkeys‑lfu: LFU eviction among all keys.

These policies can only be set on Redis 4.0 or newer; attempting to set them on older versions results in an error.

Question

Why does Redis use an approximate LRU algorithm instead of a strict one? The usual answer: strict LRU requires a structure (such as a doubly linked list) ordering every key by access time, which costs extra memory per key and must be updated on every single access. Random sampling achieves nearly the same eviction quality at a fraction of that cost.

Tags: database, redis, caching, LRU, memory, LFU, eviction
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
