
Understanding Redis Cache Penetration, Breakdown, and Avalanche: Concepts and Solutions

This article explains the three major Redis cache problems—cache penetration, cache breakdown, and cache avalanche—describes why they occur, and provides practical mitigation techniques such as cache‑null objects, Bloom filters, locking, high‑availability, rate limiting, data pre‑warming, and staggered expiration.


Redis is widely used as a caching layer, but developers must be aware of three critical issues that can degrade performance or even crash the system: cache penetration, cache breakdown, and cache avalanche.

Cache Penetration

Cache penetration happens when requests query a key that exists in neither the cache nor the database, so every request falls through to the database and creates heavy load. A typical example is a request for id = -1, which can never exist.

Two common solutions are:

Cache Null Object

When the database returns no data, store a placeholder (null object) in Redis with a short TTL. Subsequent requests hit the cache and avoid database queries.

SETEX key seconds value   // set key with an expiration time in seconds

In Java, assuming a hypothetical redisCache wrapper, this might look like:

redisCache.put(Integer.toString(id), null, 60); // placeholder expires in 60 seconds
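The pattern can also be sketched end to end. The example below is a minimal, self-contained illustration in which plain maps stand in for Redis and the database; the `NULL_MARKER` value and key names are assumptions, and a real implementation would attach the short TTL shown above to the marker.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal sketch of the cache-null-object pattern. Plain maps stand in for
// Redis and the database; a real version would set a short TTL on the marker.
public class NullObjectCache {
    static final String NULL_MARKER = "<null>"; // placeholder: "row known to be absent"
    static final Map<String, String> cache = new HashMap<>();
    static final Map<String, String> database = new HashMap<>();

    static Optional<String> get(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            // Cache hit: a stored null marker means the database has no such row.
            return NULL_MARKER.equals(cached) ? Optional.empty() : Optional.of(cached);
        }
        String fromDb = database.get(key);
        // Store the real value, or the marker when the row is missing, so a
        // repeated lookup for a nonexistent id never reaches the database again.
        cache.put(key, fromDb == null ? NULL_MARKER : fromDb);
        return Optional.ofNullable(fromDb);
    }
}
```

After the first miss for a nonexistent id, subsequent lookups for it are answered entirely from the cache.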

Bloom Filter

A Bloom filter is a probabilistic data structure that quickly tells whether an element is possibly in a set. It dramatically reduces unnecessary database queries, at the cost of a small false‑positive rate.

Large bit array (0/1)

Multiple hash functions

High space‑ and query‑efficiency

No delete operation in the standard variant (elements cannot be removed, which complicates maintenance)

The false-positive rate falls with a larger bit array and an appropriately chosen number of hash functions.
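To make the structure concrete, here is a minimal Bloom filter sketch in Java. The bit-array size, the number of hash functions, and the double-hashing scheme used to derive the k positions are all illustrative assumptions, not a production tuning.

```java
import java.util.BitSet;

// Minimal Bloom filter sketch: a bit array plus k derived hash positions.
public class BloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public BloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive the i-th position from two base hashes (double hashing);
    // the second hash here is a cheap bit rotation, chosen for illustration.
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | (h1 << 16);
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashCount; i++) bits.set(index(key, i));
    }

    // false => definitely absent; true => possibly present (false positives allowed)
    public boolean mightContain(String key) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(index(key, i))) return false;
        }
        return true;
    }
}
```

Before querying Redis or the database, a request first asks `mightContain(id)`; a `false` answer means the id certainly does not exist, so the request is rejected without touching storage.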

Cache Breakdown

Cache breakdown occurs when a hot key expires and a massive burst of requests simultaneously hit the database, overwhelming it.

Typical scenarios:

A rarely accessed key suddenly receives a flood of requests.

A popular key expires exactly when many users request it.

The common mitigation is to use locking: when the key is missing, only the first request acquires a lock, fetches the data from the database, writes it back to Redis, and releases the lock; other requests wait for the cache to be populated.

In distributed environments, use distributed locks based on Redis (e.g. SET with NX), ZooKeeper, or a database.
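Within a single process, the locking idea above amounts to double-checked rebuilding of the hot key. The sketch below uses a map in place of Redis and a hypothetical `loadFromDatabase` loader; a distributed lock would replace the `synchronized` block across multiple instances.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of mutex-protected rebuild of an expired hot key. A map stands in
// for Redis, and loadFromDatabase is a hypothetical loader for illustration.
public class HotKeyLoader {
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static final Object lock = new Object();
    static int databaseHits = 0; // counts how often the database is actually queried

    static String loadFromDatabase(String key) {
        databaseHits++;
        return "value-for-" + key;
    }

    static String get(String key) {
        String value = cache.get(key);
        if (value != null) return value; // fast path: cache hit
        synchronized (lock) {
            // Double-check: another thread may have repopulated the key
            // while this one was waiting on the lock.
            value = cache.get(key);
            if (value == null) {
                value = loadFromDatabase(key);
                cache.put(key, value);
            }
        }
        return value;
    }
}
```

However many requests race on the missing key, only the first one inside the lock reaches the database; the rest find the cache already repopulated.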

Cache Avalanche

Cache avalanche refers to a large number of keys expiring at the same time, causing a sudden surge of database traffic that can lead to crashes.

Root causes include Redis downtime or mass expiration of cached data.

Effective countermeasures:

Deploy Redis in a high-availability topology (master-replica with Sentinel, or Redis Cluster).

Apply rate limiting and degrade gracefully.

Use locks or queues to limit concurrent database reads for the same key.

Pre‑warm hot data before traffic spikes.

Assign different TTLs to keys to spread expirations over time.
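The last point, spreading expirations over time, is usually implemented as a base TTL plus random jitter when keys are written. A minimal sketch, with illustrative numbers:

```java
import java.util.concurrent.ThreadLocalRandom;

// Staggered expiration: add random jitter to a base TTL so that keys
// written in the same batch do not all expire at the same moment.
public class TtlJitter {
    static int ttlWithJitter(int baseSeconds, int maxJitterSeconds) {
        // Result is uniformly distributed in [baseSeconds, baseSeconds + maxJitterSeconds].
        return baseSeconds + ThreadLocalRandom.current().nextInt(maxJitterSeconds + 1);
    }
}
```

With a Redis client this would be applied at write time, e.g. a hypothetical `jedis.setex(key, ttlWithJitter(3600, 600), value)`, so a batch of keys cached together expires spread across a ten-minute window rather than in one instant.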

By understanding these three cache problems and applying the above strategies, developers can keep Redis‑backed services stable and performant.

Tags: backend, Cache, Redis, Bloom Filter, Cache Avalanche, Cache Breakdown, Cache Penetration
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, as well as architecture evolution with internet technologies. Idea-driven, sharing-oriented architects are welcome to exchange and learn together.
