5 Common Redis Cache Anti‑Patterns and How to Fix Them

This article examines five frequent Redis cache anti‑patterns—cache avalanche, unbounded local cache, stale data, missing invalidation, and oversized objects—explaining their pitfalls with code examples and showing concrete fixes that dramatically improve latency, throughput, and memory usage.

Spring Full-Stack Practical Cases

1. Introduction

Redis is a high‑performance in‑memory data store often used as a cache to speed up reads and reduce database load. When misused, however, the cache itself can become a performance bottleneck. This article presents five frequent Redis cache anti‑patterns and provides concise fixes with code examples.

2. Cache Anti‑Patterns

2.1 Cache avalanche

When a hot key expires, many requests rebuild it simultaneously, overwhelming the backing data store and causing connection‑pool saturation and high p99 latency.

Architecture snapshot (diagram not reproduced here)

Problem code

// Simple cache‑aside without coordination
String key = "product:" + id;
String v = cache.get(key);
if (v == null) {
    v = db.loadProductJson(id); // time‑consuming
    cache.set(key, v, 60); // TTL 60 seconds
}
return v;

Issue: every thread queries the DB after expiration, causing a thundering‑herd effect.

Fix: take a single‑flight lock per key and add random TTL jitter.

private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

String key = "product:" + id;
String v = cache.get(key);
if (v != null) return v;
ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
lock.lock();
try {
    v = cache.get(key); // double-check: another thread may have rebuilt it
    if (v == null) {
        v = db.loadProductJson(id);
        int ttl = 60 + ThreadLocalRandom.current().nextInt(0, 15); // jitter spreads expirations
        cache.set(key, v, ttl);
    }
    return v;
} finally {
    lock.unlock();
    locks.remove(key, lock); // drop the entry so the map does not grow unbounded
}

Benchmark: before fix DB reads = 18,000 /min, p99 = 720 ms; after fix DB reads = 2,100 /min, p99 = 210 ms.
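The same single‑flight idea can also be expressed with futures, so that concurrent callers share one in‑flight load instead of serializing behind a lock. This is a JDK‑only sketch; the `SingleFlight` class and its `loader` parameter are illustrative, not part of the original code:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative single-flight helper: at most one loader call runs per key,
// and concurrent callers for the same key share its result.
public class SingleFlight<K, V> {
    private final Map<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();

    public V load(K key, Function<K, V> loader) {
        CompletableFuture<V> f = inFlight.computeIfAbsent(key,
                k -> CompletableFuture.supplyAsync(() -> loader.apply(k)));
        try {
            return f.join(); // every caller waits on the same future
        } finally {
            inFlight.remove(key, f); // allow a fresh reload after completion
        }
    }
}
```

Compared with the lock map above, the future‑based variant never blocks a thread inside the loader section twice, and the two‑argument `remove(key, f)` safely drops only the completed future.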

2.2 Unbounded local cache

Using an unbounded Caffeine cache causes memory explosion and long GC pauses under burst traffic.

Problem code

// Caffeine without max size
Cache<String, String> c = Caffeine.newBuilder()
    .expireAfterWrite(Duration.ofMinutes(10))
    .build();

Fix: set a maximum weight, define a weigher, and enable stats.

Cache<String, String> c = Caffeine.newBuilder()
    .maximumWeight(200 * 1024 * 1024) // ~200 MB
    .weigher((k, v) -> v.length())
    .expireAfterWrite(Duration.ofMinutes(10))
    .recordStats()
    .build();

Benchmark: before fix p99 = 480 ms, GC pause p99 = 140 ms, memory = 2.8 GB; after fix p99 = 165 ms, GC pause p99 = 28 ms, memory = 1.1 GB.
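The bounding principle is independent of Caffeine. As a minimal JDK‑only illustration of the same idea (an assumed `BoundedLruCache` helper, capped by entry count rather than by weight), `LinkedHashMap`'s access order and `removeEldestEntry` hook give a small LRU cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative JDK-only LRU cache bounded by entry count; Caffeine's
// maximumWeight generalizes this by evicting on total per-entry weight.
public class BoundedLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used entry
    }
}
```

Unlike Caffeine, this sketch is not thread‑safe and has no TTL; its point is only that every local cache needs an explicit bound of some kind.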

2.3 Stale cache data

Caching user information without versioning leads to outdated reads when concurrent writes occur.

Problem code

String key = "user:" + userId;
db.updateUser(u);
cache.set(key, serialize(u), 300);

Fix A: embed the version in the cache key.

int ver = db.readUserVersion(userId);
String key = "user:v" + ver + ":" + userId; // version is part of the key
String v = cache.get(key);
if (v == null) {
    User u = db.loadUser(userId);
    v = serialize(u); // serialize once, reuse for both cache and return
    cache.set(key, v, 300);
}
return deserialize(v);

Fix B: compare‑and‑swap (CAS) update, then invalidate the key.

User cur = db.loadUser(userId);
// CAS: update only if the row version is unchanged since the read
boolean ok = db.updateUserIfVersion(userId, cur.version, newUser);
if (ok) {
    cache.del("user:" + userId); // invalidate so the next read repopulates
}

Benchmark: stale‑read rate dropped from 7.4 % to 0.2 %.
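Fix A works because bumping the version changes the key itself, so a reader can never hit the pre‑update entry. A minimal in‑memory sketch of that idea, where a `Map` stands in for Redis and the class and method names are illustrative:

```java
import java.util.Map;

// Illustrative versioned-key scheme: the cache key embeds the row version,
// so a write that bumps the version implicitly invalidates old entries.
public class VersionedKeys {
    public static String key(long userId, int version) {
        return "user:v" + version + ":" + userId;
    }

    public static String readThrough(Map<String, String> cache,
                                     long userId, int version, String dbValue) {
        String k = key(userId, version);
        // cache-aside: fall back to the (simulated) DB value on a miss
        return cache.computeIfAbsent(k, ignored -> dbValue);
    }
}
```

A short TTL on the old versioned entries lets them expire naturally, so no explicit delete is required on the write path.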

2.4 Cache‑aside without invalidation

Read uses cache‑aside, but write only updates the database, leaving the cache stale.

Problem code

// Read
String v = cache.get(k);
if (v == null) {
    v = db.load(k);
    cache.set(k, v, 300);
}

// Write
db.save(k, vNew); // no cache update or eviction

Fix: write‑through update or Pub/Sub invalidation.

// Option 1: write-through (update the cache together with the DB)
db.save(k, vNew);
cache.set(k, vNew, 300);

// Option 2: Pub/Sub invalidation (broadcast the key to all instances)
db.save(k, vNew);
redis.publish("invalidate", k);
// each instance subscribes and deletes the key from its local cache
redis.subscribe("invalidate", msg -> cache.del(msg));

Benchmark: inconsistency window reduced to <50 ms.
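The Pub/Sub mechanics can be simulated in‑process to make the flow concrete. The bus below is an illustrative stand‑in for Redis Pub/Sub, not real Redis client code; in production each application instance would register a real Redis subscriber instead:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Illustrative in-process stand-in for Redis Pub/Sub invalidation:
// publishing a key synchronously notifies every subscribed cache instance.
public class InvalidationBus {
    private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<String> onKey) {
        subscribers.add(onKey);
    }

    public void publish(String key) {
        subscribers.forEach(s -> s.accept(key)); // fan out to all listeners
    }
}
```

With real Redis the fan‑out is asynchronous, which is why the benchmark reports an inconsistency window (under 50 ms here) rather than zero.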

2.5 Caching oversized objects

Caching whole JSON payloads (e.g., order history) wastes CPU and bandwidth when only a few fields are needed.

Problem code

String json = cache.get(key);
if (json == null) {
    json = db.loadOrdersAsJson(userId);
    cache.set(key, json, 600);
}
Orders o = mapper.readValue(json, Orders.class);
return o.lastFive();

Fix: cache only the required slice in a compact binary format.

String key = "orders:last5:" + userId;
byte[] buf = cache.getBytes(key);
List<Order> last5;
if (buf != null) {
    last5 = deserialize(buf);
} else {
    last5 = db.loadLast5(userId);
    cache.setBytes(key, serialize(last5), 600);
}
return last5;

Benchmark: CPU per core dropped from 22 % to 7 %, p95 latency from 190 ms to 78 ms, traffic from 2.1 MB to 52 KB.
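The fix combines two steps: slice before caching, and encode the slice compactly. A JDK‑only sketch of both, where the class and method names are assumptions rather than the article's original helpers:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.List;

public class OrderSlices {
    // Cache only the slice the read path actually needs.
    public static <T> List<T> lastN(List<T> items, int n) {
        int from = Math.max(0, items.size() - n);
        return List.copyOf(items.subList(from, items.size()));
    }

    // Illustrative compact binary encoding: a 4-byte count followed by
    // 8 bytes per order id, instead of a full JSON document per order.
    public static byte[] encodeIds(List<Long> ids) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        try {
            out.writeInt(ids.size());
            for (long id : ids) out.writeLong(id);
        } catch (IOException e) { // cannot happen for an in-memory stream
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }
}
```

Three ids encode to 28 bytes (4 + 3 × 8); the equivalent JSON array of full order objects is easily kilobytes, which is where the bandwidth drop in the benchmark comes from.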

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Java, performance, cache, Redis, Spring Boot, Caffeine, Anti‑Pattern
Written by

Spring Full-Stack Practical Cases

Full-stack Java development with Vue 2/3 front-end suite; hands-on examples and source code analysis for Spring, Spring Boot 2/3, and Spring Cloud.
