Redis Cache Eviction Strategies and Solutions for Penetration, Breakdown, and Avalanche
This article explains Redis eviction policies, compares strategies like allkeys‑lru, volatile‑ttl, and noeviction, and provides practical solutions for cache penetration, breakdown, and avalanche to maintain system stability under high concurrency.
The author, a senior architect, presents a technical interview‑style discussion on how Redis handles cache eviction and common cache problems.
Interview: How does Redis cache eviction work?
Eviction Strategies
noeviction: return an error when the memory limit is reached and a client attempts a command that would allocate more memory (most write commands; DEL and a few others are exceptions).
allkeys-lru: evict the least recently used (LRU) keys to make room for newly added data.
volatile-lru: evict the least recently used (LRU) keys, but only among keys with an expiration set, to make room for newly added data.
allkeys-random: evict random keys to make room for newly added data.
volatile-random: evict random keys, but only among keys with an expiration set.
volatile-ttl: evict keys with an expiration set, preferring those with the shortest remaining time to live (TTL).
volatile-lfu: evict the least frequently used (LFU) keys among those with an expiration set.
allkeys-lfu: evict the least frequently used (LFU) keys among all keys.
If no key satisfies the eviction preconditions, the volatile-lru, volatile-random, and volatile-ttl strategies behave like noeviction.
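To make the LRU idea concrete, here is a toy in-process sketch of allkeys-lru behavior. Note this is exact LRU over an `OrderedDict`; real Redis approximates LRU by sampling a handful of keys per eviction, so the victims it picks can differ.

```python
from collections import OrderedDict

class LRUCache:
    """Toy allkeys-lru cache: exact LRU, unlike Redis's sampled approximation."""
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion order doubles as recency order

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)       # updating a key counts as a use
        self.data[key] = value
        while len(self.data) > self.max_keys:
            self.data.popitem(last=False)    # evict the least recently used key

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)           # reading a key counts as a use
        return self.data[key]

cache = LRUCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")      # "a" becomes the most recently used key
cache.set("c", 3)   # capacity exceeded: "b" (least recently used) is evicted
```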
Choosing the right eviction policy depends on the application's access pattern; you can adjust it at runtime and monitor hit/miss rates via the Redis INFO command.
allkeys-lru: Suitable when access follows a power-law distribution; a good default choice.
allkeys-random: Works well when keys are accessed uniformly or scanned sequentially.
volatile-ttl: Use when you set explicit TTLs on cache objects and want short-lived items to expire first.
The volatile-lru and volatile-random policies are mainly useful when a single instance must both serve as a cache and persist a set of keys, though running two separate instances is often the better solution.
Setting expiration times for keys consumes memory; therefore allkeys‑lru is more efficient when memory pressure is high.
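Both the memory limit and the policy are set through the `maxmemory` and `maxmemory-policy` directives; a minimal redis.conf fragment (the values here are illustrative, not recommendations):

```
# redis.conf — cap memory and choose an eviction policy
maxmemory 2gb
maxmemory-policy allkeys-lru
```

Both can also be changed at runtime with `CONFIG SET maxmemory-policy allkeys-lru`, and the `keyspace_hits` / `keyspace_misses` counters in `INFO stats` show whether the chosen policy suits the workload.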
How the Eviction Process Works
Understanding the eviction workflow is crucial:
A client issues a command that adds new data.
Redis checks memory usage; if it exceeds maxmemory, it evicts keys according to the configured policy.
The cycle repeats as more commands are executed.
Continuous memory‑boundary crossing triggers repeated eviction until usage falls below the limit.
Large commands that generate massive data (e.g., storing the result of a big set intersection) can quickly exceed the memory limit.
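The workflow above can be modeled as a toy loop: the write lands first (briefly pushing usage past the limit, which is why large commands can overshoot), then keys are evicted until usage is back under `maxmemory`. This sketch uses random eviction as a stand-in for whatever policy is configured; the sizes are arbitrary.

```python
import random

def apply_write(used, limit, write_size, key_sizes):
    """Toy model of Redis's eviction cycle: perform the write first, then
    evict keys (random policy here) until usage falls back under the limit."""
    used += write_size                           # the command may briefly exceed maxmemory
    while used > limit and key_sizes:
        victim = random.choice(list(key_sizes))  # stand-in for the configured policy
        used -= key_sizes.pop(victim)            # reclaim the victim's memory
    return used

key_sizes = {"a": 40, "b": 30, "c": 30}
used = apply_write(used=100, limit=120, write_size=50, key_sizes=key_sizes)
```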
Interview: Solutions for Cache Penetration, Breakdown, and Avalanche
Cache Penetration
Penetration occurs when queries for non‑existent data repeatedly miss the cache and hit the database, potentially overwhelming it under high traffic.
Solutions
Common approaches include using a Bloom filter to pre‑filter impossible keys, or caching empty results with a very short TTL (e.g., up to five minutes).
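A Bloom filter answers "definitely absent" or "possibly present" with no false negatives, so impossible keys can be rejected before they ever reach the cache or the database. A minimal pure-Python sketch (the sizing, the user IDs, and the `lookup` stub are illustrative; production systems typically use a library or Redis's bitmaps):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunable false-positive rate."""
    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] >> (pos % 8) & 1 for pos in self._positions(key))

bf = BloomFilter()
for user_id in ("u1", "u2", "u3"):  # preload every key that actually exists
    bf.add(user_id)

def lookup(user_id):
    if not bf.might_contain(user_id):
        return None                   # definitely absent: skip cache and DB entirely
    return f"profile:{user_id}"       # fall through to the normal cache/DB path (stubbed)
```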
Cache Breakdown
When a hot key expires, a sudden surge of concurrent requests may all hit the database, causing a spike that can crash the backend.
Solutions
Use a mutex (e.g., Redis SETNX) so that when the hot key expires, only one request rebuilds it from the database while the others wait briefly and retry the cache; alternatively, treat hot keys as logically non-expiring and refresh them proactively.
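A per-key mutex is the standard defense against an expiring hot key: only the lock holder touches the database. This in-process sketch uses `threading.Lock` as a stand-in for a distributed Redis lock (which would be `SET key token NX PX <ttl>`); the polling loop and `slow_db` are illustrative.

```python
import threading
import time

cache = {}
db_loads = 0
rebuild_lock = threading.Lock()

def get_with_mutex(key, load_from_db):
    """Cache-aside read: on a miss, only the lock winner rebuilds the value;
    losing threads wait and re-read instead of stampeding the database."""
    global db_loads
    if key in cache:
        return cache[key]
    if rebuild_lock.acquire(blocking=False):
        try:
            if key not in cache:          # double-check after winning the lock
                db_loads += 1
                cache[key] = load_from_db(key)
            return cache[key]
        finally:
            rebuild_lock.release()
    while key not in cache:               # losers poll until the winner fills the cache
        time.sleep(0.01)
    return cache[key]

def slow_db(key):
    time.sleep(0.05)                      # simulate an expensive database query
    return f"value-for-{key}"

threads = [threading.Thread(target=get_with_mutex, args=("hot", slow_db))
           for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
```

Despite ten concurrent misses, the database is queried exactly once.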
Cache Avalanche
Avalanche happens when many keys share the same expiration time, causing a massive, simultaneous cache miss and overwhelming the database.
Solutions
1. Distribute expiration times by adding a random offset (e.g., 1–5 minutes) to each key's TTL, reducing the chance of simultaneous expirations.
2. Use a mutex (e.g., Redis SETNX) so that, per key, only one request loads data from the DB and repopulates the cache.
3. Adopt a "logical expiration" strategy: store the expiration timestamp inside the value and refresh the cache asynchronously, keeping the Redis key itself permanent.
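TTL jitter takes one line to implement; the base TTL and the five-minute jitter window below are illustrative choices to tune per workload.

```python
import random

BASE_TTL = 3600  # one-hour base expiration (illustrative)

def jittered_ttl(base=BASE_TTL, max_jitter=300):
    """Spread expirations: each key gets up to 5 extra minutes of random TTL,
    so keys cached at the same moment no longer all expire together."""
    return base + random.randint(0, max_jitter)

ttls = [jittered_ttl() for _ in range(1000)]
```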
Summary
Penetration: the key exists in neither cache nor DB; high concurrency on a few keys. Breakdown: cache miss on a hot key the DB does hold; high concurrency on a few keys. Avalanche: simultaneous cache misses on many keys the DB holds, under high concurrency. All three can additionally be mitigated with rate limiting and mutex locks to protect the database.
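The logical-expiration strategy mentioned above can be sketched as follows: the stored value carries its own expiry timestamp, stale reads return immediately, and a single background thread refreshes the entry. This is a minimal single-process illustration; a real deployment would store the (value, timestamp) pair in Redis with no TTL and hand the refresh to a worker.

```python
import threading
import time

cache = {}        # key -> (value, logical_expire_at); the key itself never expires
refreshing = set()
lock = threading.Lock()

def get_logical(key, rebuild, ttl=60):
    """Serve stale data instantly while at most one thread refreshes it."""
    entry = cache.get(key)
    now = time.time()
    if entry is None:                          # first load must be synchronous
        cache[key] = (rebuild(key), now + ttl)
        return cache[key][0]
    value, expires_at = entry
    if expires_at <= now:
        with lock:                             # elect a single refresher per key
            needs_refresh = key not in refreshing
            if needs_refresh:
                refreshing.add(key)
        if needs_refresh:
            def refresh():
                cache[key] = (rebuild(key), time.time() + ttl)
                with lock:
                    refreshing.discard(key)
            threading.Thread(target=refresh).start()
    return value                               # stale value is served immediately
```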
Feel free to discuss, ask questions, or share your own insights.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.