
Cache Strategies and Common Issues: Consistency, Penetration, and Avalanche

The article explains why excessive database reads become a performance bottleneck, introduces a cache layer between applications and MySQL, details common caching patterns such as Cache‑Aside, Read‑Through, Write‑Through and Write‑Behind, and discusses consistency, penetration, and avalanche problems along with practical mitigation techniques.

Architect's Guide

In real‑world development, frequent disk reads from databases can become a performance bottleneck, especially under high access volume.

Typical system architecture places a cache layer between the business system and MySQL to reduce database pressure, as shown in the diagram below.

When data volume grows, adding a cache helps avoid excessive disk I/O, but real projects encounter several classic issues.

1. Cache‑Database Consistency Issues

Common caching mechanisms include Cache-Aside, Read-Through, Write-Through, and Write-Behind. The Cache-Aside pattern works as follows:

Cache hit: Data is retrieved directly from the cache.

Cache miss: The system reads from the database, then populates the cache.

Cache update: After a write to the database, the corresponding cache entry is invalidated.

This pattern is widely used but can still produce stale data when a write occurs between a miss‑read and cache invalidation, as described in a Facebook research paper (https://www.usenix.org/system/files/conference/nsdi13/nsdi13-final170_update.pdf).
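The three Cache-Aside steps above can be sketched as follows. This is a minimal illustration: the in-memory dicts stand in for Redis and MySQL, and all names are hypothetical.

```python
# Minimal Cache-Aside sketch. `cache` stands in for Redis and `db` for
# MySQL; in production, cache entries would also carry a TTL.
cache = {}                              # key -> value
db = {"user:1": {"name": "alice"}}      # backing store

def read(key):
    # Cache hit: return directly from the cache.
    if key in cache:
        return cache[key]
    # Cache miss: read from the database, then populate the cache.
    value = db.get(key)
    if value is not None:
        cache[key] = value
    return value

def write(key, value):
    # Write to the database first, then invalidate (not update) the
    # cache entry; the next read repopulates it.
    db[key] = value
    cache.pop(key, None)
```

Note that `write` invalidates rather than updates the cache entry; updating it in place widens the window for the stale-data race described above.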

Read-Through forces the application to always read from the cache; on a miss, the cache itself fetches from the database, updates itself, and returns the data. It simplifies application code but requires the cache provider to support a pluggable loader component.
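The difference from Cache-Aside is who performs the database read. A rough sketch, where the loader callback is an assumed interface rather than any real cache library's API:

```python
# Read-Through sketch: the application only talks to the cache; the
# cache loads from the backing store on a miss via a loader callback.
class ReadThroughCache:
    def __init__(self, loader):
        self._store = {}
        self._loader = loader             # e.g. a function that queries MySQL

    def get(self, key):
        if key in self._store:
            return self._store[key]       # hit
        value = self._loader(key)         # miss: cache fetches from the DB
        if value is not None:
            self._store[key] = value      # ...and updates itself
        return value
```

The application now calls only `cache.get(key)`; the miss-handling logic that Cache-Aside scatters through application code lives inside the cache layer.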

Write Through updates the cache first and then synchronously writes to the database, ensuring consistency when the cache is hit.

Write Behind writes data to the cache and asynchronously propagates changes to the database, reducing load but risking data loss if the cache node fails.
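The Write-Behind trade-off can be seen in a small sketch. A real implementation would flush from a background thread or timer; here `flush()` is called manually, and all names are illustrative.

```python
from collections import deque

# Write-Behind sketch: writes land in the cache immediately and are
# propagated to the database later from a pending-write queue.
class WriteBehindCache:
    def __init__(self, db):
        self._store = {}
        self._db = db
        self._pending = deque()           # dirty keys awaiting a DB write

    def put(self, key, value):
        self._store[key] = value          # fast path: cache only
        self._pending.append(key)

    def flush(self):
        # Propagate queued writes to the database. If the cache node
        # fails before flush() runs, these writes are lost -- the risk
        # noted above.
        while self._pending:
            key = self._pending.popleft()
            self._db[key] = self._store[key]
```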

2. Cache Penetration

In high-concurrency scenarios, requests for keys that exist in neither the cache nor the database always miss the cache and fall through to the database, which can overwhelm it. Solutions include caching a null marker for non-existent keys and using a Bloom filter to reject nonexistent keys before they ever reach Redis or the database.
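Both mitigations can be sketched together. The Bloom filter below is deliberately tiny (its parameters are illustrative, not tuned), and the `NULL` sentinel marks keys known to be absent:

```python
import hashlib

# Tiny Bloom filter: k hash positions over an m-bit array. False
# positives are possible; false negatives are not.
class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        return all(self.bits >> pos & 1 for pos in self._positions(key))

NULL = object()  # sentinel cached for keys absent from the database

def read(key, cache, db, bloom=None):
    # Pre-filter: a key the filter has never seen is definitely absent,
    # so neither the cache nor the database is touched.
    if bloom is not None and not bloom.might_contain(key):
        return None
    if key in cache:
        value = cache[key]
        return None if value is NULL else value
    value = db.get(key)
    cache[key] = value if value is not None else NULL  # cache the miss too
    return value
```

Cached null markers should carry a short TTL in practice, so that a key created later in the database becomes visible again.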

3. Cache Avalanche

When many cache entries expire simultaneously or a cache server restarts, a sudden surge of database queries can occur. Mitigation strategies include:

Using distributed locks to allow only one request to repopulate the cache.

Pre‑warming the cache before traffic spikes.

Staggering expiration times to avoid synchronized expiry.

Deploying master‑slave + Sentinel or Redis Cluster for high availability and sharding.

Combining local Ehcache with Hystrix for rate‑limiting and fallback to protect MySQL.
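Two of the mitigations above, staggered expiration and lock-guarded repopulation, can be sketched as follows. `threading.Lock` stands in for a distributed lock (in production this would be something like a Redis `SET key value NX EX ttl` lock), and the TTL figures are illustrative.

```python
import random
import threading

BASE_TTL = 600  # seconds; illustrative

def jittered_ttl(base=BASE_TTL, spread=0.2):
    # Spread expiry over +/-20% of the base TTL so that entries written
    # at the same moment do not all expire in lockstep.
    return base * (1 + random.uniform(-spread, spread))

_rebuild_locks = {}
_locks_guard = threading.Lock()

def get_or_rebuild(key, cache, loader):
    value = cache.get(key)
    if value is not None:
        return value
    with _locks_guard:
        lock = _rebuild_locks.setdefault(key, threading.Lock())
    with lock:                      # only one caller rebuilds this key
        value = cache.get(key)      # re-check: another caller may have won
        if value is None:
            value = loader(key)     # the lone database query
            cache[key] = value      # real code would set ttl=jittered_ttl()
    return value
```

Under a thundering herd, every caller but the lock winner finds the entry already repopulated on the re-check, so the database sees one query per expired key instead of one per request.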

These techniques help maintain system stability under heavy load.

Author: Danny_idea (source: https://blog.csdn.net/Danny_idea/article/details/91347674). The content is provided for learning and research purposes only.
