10 Cache Governance Rules Every Backend Engineer Should Follow

This article shares ten practical cache governance rules—from avoiding large keys and setting proper TTLs to using distributed locks and eventual-consistency strategies—illustrated with real‑world Java examples, code snippets, and diagrams to help backend developers design reliable, high‑performance caching solutions.

Su San Talks Tech

Introduction

When a production incident shows a 0% cache hit rate, database QPS soaring, and worker threads exhausted, the root cause is often a missing or broken cache layer. Direct DB access without caching can cripple performance, so mastering cache best practices is essential.

Rule 1: Avoid Large Keys

Bad example: Caching an entire user object with all relations can cause frequent GC under high request volume.

@Cacheable(value = "user", key = "#id")
public User getUser(Long id) {
    return userDao.findWithAllRelations(id);
}

Correct practice: Split the cache into smaller objects, e.g., base info and detailed info.

@Cacheable(value = "user_base", key = "#id")
public UserBase getBaseInfo(Long id) { /*...*/ }

@Cacheable(value = "user_detail", key = "#id")
public UserDetail getDetailInfo(Long id) { /*...*/ }

Large cache entries lead to memory fragmentation and full GC; store only frequently accessed fields and keep heavy data separate.

Rule 2: Always Set Expiration Time

Failure case: A configuration cache set to never expire caused changes to take three days to become effective.

@Cacheable(value = "config", key = "#key")
public String getConfig(String key) {
    return configDao.get(key);
}

Configure Redis TTL, e.g., 5 minutes:

# 5 minutes (the value is in milliseconds)
spring.cache.redis.time-to-live=300000
spring.cache.redis.cache-null-values=false

A reasonable starting point: TTL ≈ average data-change period × 0.3. Too short and the cache misses constantly, pushing load back onto the DB; too long and readers see stale data. A dynamic TTL with random jitter works well in practice.

Example: product detail page TTL = 30 min + random 0‑5 min.
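That jittered TTL can be sketched as a small pure helper (the class and method names here are illustrative, not from the original code):

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlPolicy {
    // Base TTL of 30 minutes plus 0-5 minutes of random jitter,
    // so product-detail keys written together do not expire together.
    public static long productDetailTtlSeconds() {
        long base = 30 * 60;                                            // 30 min
        long jitter = ThreadLocalRandom.current().nextLong(5 * 60 + 1); // 0-5 min
        return base + jitter;
    }
}
```

The result is passed straight to the cache write, e.g. `redisTemplate.opsForValue().set(key, value, TtlPolicy.productDetailTtlSeconds(), TimeUnit.SECONDS)`.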

Rule 3: Avoid Bulk Expiration

Setting the same TTL for all keys can cause a “thundering herd” when they expire simultaneously, overwhelming the DB.

Solution: add random jitter to the base TTL.

public long randomTtl(long baseTtl) {
    // add 0-299 s (~0-5 min) of jitter; ThreadLocalRandom avoids
    // allocating a new Random instance on every call
    return baseTtl + ThreadLocalRandom.current().nextInt(300);
}

redisTemplate.opsForValue().set(key, value, randomTtl(1800), TimeUnit.SECONDS);

Result: expiration times are distributed.

Rule 4: Add Circuit‑Breaker/Fallback

Cache failures should not bring down the service. Use Hystrix (or Sentinel) to provide a fallback.

@HystrixCommand(fallbackMethod = "getProductFallback",
    commandProperties = {
        @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "20"),
        @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "5000")
    })
public Product getProduct(Long id) {
    return productDao.findById(id);
}

public Product getProductFallback(Long id) {
    return new Product().setDefault(); // return default data
}

Rule 5: Cache Empty Values

When a key does not exist, cache a placeholder to avoid repeated DB hits.

private static final String EMPTY_MARKER = "empty";

public Product getProduct(Long id) {
    String key = "product:" + id;
    Object cached = redis.get(key);
    if (cached != null) {
        // a cached placeholder means we already know this id does not exist
        if (EMPTY_MARKER.equals(cached)) return null;
        return (Product) cached;
    }
    Product product = productDao.findById(id);
    if (product == null) {
        redis.setex(key, 300, EMPTY_MARKER); // cache the miss for 5 min
        return null;
    }
    redis.setex(key, 3600, product);
    return product;
}

Rule 6: Use Redisson for Distributed Locks

Redisson provides a reliable lock implementation to prevent cache stampede.

public Product getProduct(Long id) {
    String key = "product:" + id;
    Product product = redis.get(key);
    if (product == null) {
        RLock lock = redisson.getLock("lock:" + key);
        try {
            // wait up to 3 s for the lock; auto-release after 30 s
            if (lock.tryLock(3, 30, TimeUnit.SECONDS)) {
                try {
                    // double-check: another thread may have rebuilt the cache
                    product = redis.get(key);
                    if (product == null) {
                        product = productDao.findById(id);
                        redis.setex(key, 3600, product);
                    }
                } finally {
                    lock.unlock(); // unlock only when we actually hold the lock
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    return product;
}

Rule 7: Delayed Double‑Delete

To keep DB and cache consistent, delete the cache, update the DB, then delete the cache again after a short delay. The second delete evicts any stale value that a concurrent reader loaded from the DB and wrote back between the first delete and the update; the delay should be slightly longer than one read-plus-cache-write cycle.

@Transactional
public void updateProduct(Product product) {
    // 1. delete cache
    redis.delete("product:" + product.getId());
    // 2. update DB
    productDao.update(product);
    // 3. delete again after a short delay, evicting any stale value
    //    a concurrent reader cached between steps 1 and 2
    executor.schedule(() -> {
        redis.delete("product:" + product.getId());
    }, 500, TimeUnit.MILLISECONDS);
}

Rule 8: Eventual Consistency via Binlog

After DB commit, Canal captures the binlog, publishes an MQ message, and consumers delete the related cache entry.
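The consumer side of that pipeline can be sketched as follows; the `BinlogEvent` shape and the listener method are hypothetical stand-ins (Canal's and RocketMQ's actual message types differ), chosen so the invalidation logic is self-contained and testable:

```java
import java.util.function.Consumer;

public class CacheInvalidator {
    // Hypothetical shape of the message Canal publishes to MQ:
    // which table changed and the primary key of the changed row.
    public record BinlogEvent(String table, long primaryKey) {}

    private final Consumer<String> cacheDeleter; // e.g. redis::delete in production

    public CacheInvalidator(Consumer<String> cacheDeleter) {
        this.cacheDeleter = cacheDeleter;
    }

    // Maps a binlog event to the cache-key convention used in this article.
    static String cacheKeyFor(BinlogEvent event) {
        return event.table() + ":" + event.primaryKey();
    }

    // MQ listener body: delete the cache entry for the changed row.
    public void onMessage(BinlogEvent event) {
        cacheDeleter.accept(cacheKeyFor(event));
    }
}
```

Because the cache is only ever deleted (never written) by this path, the next read rebuilds it from the committed DB state, which is what makes the consistency eventual rather than immediate.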

Rule 9: Hot‑Data Pre‑Loading

Use Redis HyperLogLog to track access frequency and preload hot keys.

// record access
public void recordAccess(Long productId) {
    String key = "access:product:" + productId;
    redis.pfadd(key, UUID.randomUUID().toString());
    redis.expire(key, 60); // last 60 s
}

// detect hot keys every 10 s
@Scheduled(fixedRate = 10000)
public void detectHotKeys() {
    // NOTE: KEYS is O(N) and blocks Redis; prefer SCAN in production
    Set<String> keys = redis.keys("access:product:*");
    keys.forEach(key -> {
        long count = redis.pfcount(key);
        if (count > 1000) { // threshold
            Long productId = extractId(key);
            preloadProduct(productId);
        }
    });
}

Rule 10: Choose the Right Redis Data Structure

Different use‑cases require different structures:

String : counters, simple locks.

Hash : store object fields for partial updates.

List : message queues, recent N records.

Set : tag systems, set intersections (e.g., mutual friends).

ZSet : leaderboards, delayed queues.
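For illustration, the Hash and ZSet cases above map to Redis commands like these (key names are examples):

```
# Hash: update one field of a cached object without rewriting the whole value
HSET user:1001 nickname "susan"
HGET user:1001 nickname

# ZSet: leaderboard ordered by score
ZADD leaderboard 3500 "player:1"
ZADD leaderboard 4200 "player:2"
ZREVRANGE leaderboard 0 9 WITHSCORES   # top 10
```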

Summary of Golden Rules

Problem: Cache Penetration — Recommendation: Cache empty values + Bloom filter — Tool: Redisson BloomFilter

Problem: Cache Avalanche — Recommendation: Random TTL + circuit breaker — Tool: Hystrix/Sentinel

Problem: Cache Stampede — Recommendation: Mutex lock + hot‑data preload — Tool: Redisson Lock

Problem: Data Consistency — Recommendation: Delayed double‑delete + eventual consistency — Tool: Canal + RocketMQ
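The Bloom-filter half of the cache-penetration recommendation works like this: preload the ids that actually exist, and reject lookups for ids the filter has never seen before they touch Redis or the DB. The sketch below uses a tiny in-memory stand-in (a `BitSet` with two hash positions) so the idea is self-contained; in production you would use Redisson's `RBloomFilter` instead:

```java
import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;

    public SimpleBloomFilter(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    // Two cheap hash positions per key; real filters use more.
    private int h1(long id) { return Math.floorMod(Long.hashCode(id), size); }
    private int h2(long id) { return Math.floorMod(Long.hashCode(id * 31 + 7), size); }

    public void add(long id) {
        bits.set(h1(id));
        bits.set(h2(id));
    }

    // false => the id definitely does not exist: skip cache and DB entirely.
    // true  => the id may exist (false positives possible): proceed to cache/DB.
    public boolean mightContain(long id) {
        return bits.get(h1(id)) && bits.get(h2(id));
    }
}
```

Typical usage: load all existing product ids into the filter at startup, and on each lookup return immediately when `mightContain` is false, so hostile scans of nonexistent ids never reach the database.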

Final Advice

Cache is a double‑edged sword: used well it boosts performance, misused it becomes a ticking bomb. Before adding a cache layer, ask yourself:

Do I really need caching?

Is the cache solution complete?

Do I have fallback measures?

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: backend, Redis, caching, TTL, distributed-lock
Written by Su San Talks Tech
Su San, former staff at several leading tech companies, is a top creator on Juejin and a premium creator on CSDN, and runs the free coding practice site www.susan.net.cn.
