Handling Redis Cache Penetration, Avalanche, and Breakdown in High‑Concurrency Scenarios
This article explains the four common Redis cache pitfalls—cache penetration, avalanche, breakdown, and data inconsistency—demonstrates how they can crash high‑traffic systems, and provides practical Java/Spring Boot solutions such as empty‑object caching, Bloom filters, distributed locks, and delayed double‑delete strategies.
Whether in day‑to‑day development or in interviews, Redis work inevitably raises four special cases: cache penetration, cache avalanche, cache breakdown, and data consistency. Ignoring them in high‑concurrency environments can cause system crashes or data corruption.
Cache Penetration
Cache penetration occurs when a request queries a key that exists in neither Redis nor the database, so every such request falls through the cache and hits the database.
@Slf4j
@Service
public class DocumentInfoServiceImpl extends ServiceImpl<DocumentInfoMapper, DocumentInfo>
        implements DocumentInfoService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    @Override
    public DocumentInfo getDocumentDetail(int docId) {
        String redisKey = "doc::info::" + docId;
        String obj = stringRedisTemplate.opsForValue().get(redisKey);
        DocumentInfo documentInfo = null;
        if (StrUtil.isNotEmpty(obj)) {
            log.info("==== select from cache ====");
            documentInfo = JSONUtil.toBean(obj, DocumentInfo.class);
        } else {
            log.info("==== select from db ====");
            documentInfo = this.lambdaQuery().eq(DocumentInfo::getId, docId).one();
            if (ObjectUtil.isNotNull(documentInfo)) {
                stringRedisTemplate.opsForValue().set(redisKey, JSONUtil.toJsonStr(documentInfo), 5L, TimeUnit.SECONDS);
            }
        }
        return documentInfo;
    }
}

Solution 1: cache an empty object when the queried data does not exist, setting a short TTL to avoid repeated DB hits.
// the empty marker was cached earlier: the queried object does not exist
if (StrUtil.equals(obj, "")) {
    log.info("==== select from cache, data not available ====");
    return null;
}
if (StrUtil.isNotEmpty(obj)) {
    log.info("==== select from cache ====");
    documentInfo = JSONUtil.toBean(obj, DocumentInfo.class);
} else {
    log.info("==== select from db ====");
    documentInfo = this.lambdaQuery().eq(DocumentInfo::getId, docId).one();
    // cache an empty string with a short TTL when the data is missing
    stringRedisTemplate.opsForValue().set(redisKey, ObjectUtil.isNotNull(documentInfo) ? JSONUtil.toJsonStr(documentInfo) : "", 5L, TimeUnit.SECONDS);
}

Solution 2: use a Bloom filter to pre‑filter nonexistent keys, reducing DB pressure.
/** Bloom filter insertion, pseudo-code */
int[] bit = new int[10000];                          // the filter's bit array
List<String> insertData = Arrays.asList("A", "B", "C");
for (String insertDatum : insertData) {
    // each element is mapped by k (here 3) independent hash functions
    for (int i = 1; i <= 3; i++) {
        int bitIdx = hash_i(insertDatum);            // hash_i: the i-th hash function
        bit[bitIdx] = 1;
    }
}

In practice the implementation can rely on Guava's BloomFilter or Hutool's BitMapBloomFilter:
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

// 10,000 expected insertions, 1% false-positive rate
public static BloomFilter<Integer> localBloomFilter =
        BloomFilter.create(Funnels.integerFunnel(), 10000L, 0.01);
// preload existing ids with localBloomFilter.put(docId);
// a request whose id fails localBloomFilter.mightContain(docId) can be rejected before hitting the DB

Cache Breakdown
When a hot key expires, all concurrent requests fall back to the database, potentially overwhelming it.
Solution 1: keep hot data without TTL and update the cache synchronously with DB changes.
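The idea can be sketched without Redis. In this self‑contained example (class and method names are illustrative; a ConcurrentHashMap stands in for Redis), the write path persists to the DB first and then refreshes the cache entry, which never expires because no TTL is ever set:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Self-contained sketch of the "no TTL + synchronous update" pattern.
// In the article's service this step would be
// stringRedisTemplate.opsForValue().set(key, json) with no TTL argument.
public class WriteThroughSketch {

    static final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    static final Map<Integer, String> db = new ConcurrentHashMap<>();   // stand-in for the DB

    static boolean updateDocument(int docId, String json) {
        db.put(docId, json);                        // 1. persist to the database first
        cache.put("doc::info::" + docId, json);     // 2. refresh the cache, no expiration
        return true;
    }

    public static void main(String[] args) {
        updateDocument(1, "hot-doc-v2");
        System.out.println(cache.get("doc::info::1")); // prints "hot-doc-v2"
    }
}
```

Because the cache is rewritten in the same code path as every DB change, the hot key never goes stale and never expires, so no thundering herd can form around it.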
Solution 2: use a mutex (local synchronized or distributed lock) so that only one thread queries the DB while others wait for the cache to be populated.
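The wait‑and‑recheck flow can be illustrated without Redis. In this self‑contained sketch (names are illustrative), ConcurrentHashMap.putIfAbsent stands in for Redis SETNX, and 20 concurrent readers of an expired key cause exactly one simulated DB load:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MutexRebuildSketch {

    static final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    static final Map<String, String> locks = new ConcurrentHashMap<>(); // putIfAbsent ~ SETNX
    static final AtomicInteger dbHits = new AtomicInteger();            // counts simulated DB loads

    static String get(String key) throws InterruptedException {
        String v;
        while ((v = cache.get(key)) == null) {
            if (locks.putIfAbsent(key, "1") == null) {    // acquired the mutex
                try {
                    v = cache.get(key);                   // double-check after acquiring
                    if (v != null) return v;
                    dbHits.incrementAndGet();             // exactly one thread reaches here
                    Thread.sleep(20);                     // simulate a slow DB query
                    cache.put(key, "db-value");           // rebuild the cache before releasing
                    return "db-value";
                } finally {
                    locks.remove(key);                    // release the mutex
                }
            }
            Thread.sleep(10);                             // losers wait, then re-check the cache
        }
        return v;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 20; i++) {
            pool.submit(() -> {
                try { get("doc::info::1"); } catch (InterruptedException ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("DB hits: " + dbHits.get());   // prints "DB hits: 1"
    }
}
```

Note the double‑check after acquiring the lock: a thread that wins the mutex late may find the cache already rebuilt and must not query the DB again. With Redis, the same role is played by a SETNX‑based lock such as the RedisLockUtil shown next.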
@Component
public class RedisLockUtil {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    public boolean tryLock(String key, String value, long exp) {
        // SET key value NX EX exp — only one caller can create the key
        Boolean absent = stringRedisTemplate.opsForValue().setIfAbsent(key, value, exp, TimeUnit.SECONDS);
        while (!Boolean.TRUE.equals(absent)) {
            // spin with a short sleep until the holder releases the lock or it expires
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            absent = stringRedisTemplate.opsForValue().setIfAbsent(key, value, exp, TimeUnit.SECONDS);
        }
        return true;
    }

    public void unLock(String key, String value) {
        // only the lock holder (matching value) may delete the key
        String s = stringRedisTemplate.opsForValue().get(key);
        if (StrUtil.equals(s, value)) {
            stringRedisTemplate.delete(key);
        }
    }
}

Cache Avalanche
Mass expiration of keys with identical TTL leads to a sudden DB surge.
Solution 1: add a random offset to each key’s TTL.
int randomInt = RandomUtil.randomInt(2, 10);
stringRedisTemplate.opsForValue().set(redisKey, JSONUtil.toJsonStr(documentInfo), 5L + randomInt, TimeUnit.SECONDS);

Solution 2: avoid setting a TTL for data that can tolerate eventual consistency.
Solution 3: deploy a highly available Redis cluster to mitigate single‑node failures.
Data Consistency Strategies
Four typical update patterns are discussed: update the cache then the DB, update the DB then the cache, delete the cache then update the DB, and update the DB then delete the cache. Each can leave the cache and DB inconsistent under concurrent reads and writes.
Recommended approach: delayed double delete.
@Data
public class DoubleDeleteTask implements Delayed {

    private String key;  // cache key to delete
    private long time;   // absolute execution time (now + delay), in ms

    public DoubleDeleteTask(String key, long delay) {
        this.key = key;
        this.time = delay + System.currentTimeMillis();
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(time - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed o) {
        return Long.compare(time, ((DoubleDeleteTask) o).time);
    }
}

A DelayQueue<DoubleDeleteTask> is configured as a Spring bean, and a background thread consumes tasks, performing the second delete after a configurable delay (e.g., 2 seconds) and retrying on failure.
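The timing contract this relies on can be verified in isolation. A minimal, self‑contained sketch (no Spring or Redis; class and method names are illustrative) showing that DelayQueue.take() releases a DoubleDeleteTask‑style task only after its delay has elapsed:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Self-contained check of the Delayed contract DoubleDeleteTask relies on:
// DelayQueue.take() blocks until the task's delay has elapsed.
public class DelayQueueDemo {

    static class Task implements Delayed {
        final String key;
        final long time; // absolute due time, in ms

        Task(String key, long delayMs) {
            this.key = key;
            this.time = System.currentTimeMillis() + delayMs;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(time - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed o) {
            return Long.compare(time, ((Task) o).time);
        }
    }

    // enqueue one task and measure how long take() blocks
    static long runOnce(long delayMs) throws InterruptedException {
        DelayQueue<Task> queue = new DelayQueue<>();
        long start = System.currentTimeMillis();
        queue.add(new Task("doc::info::1", delayMs));
        queue.take(); // returns only once the delay has elapsed
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("waited " + runOnce(200) + " ms"); // roughly 200 ms or slightly more
    }
}
```

In the article's setup the consumer thread below plays the role of this `take()` loop, running for the lifetime of the application.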
@Slf4j
@Component
public class DoubleDeleteTaskRunner implements CommandLineRunner {

    @Resource
    private DelayQueue<DoubleDeleteTask> doubleDeleteQueue;

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    private static final int RETRY_COUNT = 3;

    @Override
    public void run(String... args) throws Exception {
        new Thread(() -> {
            try {
                while (true) {
                    // blocks until a task's delay has elapsed
                    DoubleDeleteTask task = doubleDeleteQueue.take();
                    String key = task.getKey();
                    for (int i = 0; i < RETRY_COUNT; i++) {
                        try {
                            stringRedisTemplate.delete(key);
                            log.info("==== delayed delete key:{} ====", key);
                            break;
                        } catch (Exception e) {
                            log.warn("==== delayed delete failed, key:{}, attempt:{} ====", key, i + 1, e);
                        }
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "double-delete-task").start();
    }
}

When updating a document, the service first deletes the cache, updates the DB, and then enqueues a DoubleDeleteTask to remove any stale cache entry that a concurrent read might have written in between.
public boolean updateDocument(DocumentInfo documentInfo) {
    String redisKey = "doc::info::" + documentInfo.getId();
    // 1. delete the cache first
    stringRedisTemplate.delete(redisKey);
    // 2. update the DB
    boolean b = this.updateById(documentInfo);
    // 3. enqueue a delayed second delete to clear any stale entry written in between
    doubleDeleteQueue.add(new DoubleDeleteTask(redisKey, 2000L));
    return b;
}

By combining empty‑object caching, Bloom filters, mutex/distributed locks, randomized TTLs, and delayed double deletes, the article provides a comprehensive toolkit for preventing Redis‑related cache failures in high‑traffic Java/Spring Boot applications.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.