
Cache Operations: Read, Write, Consistency Issues and Optimization Strategies

This article explains cache read and write mechanisms, the impact of operation order on data consistency, and presents optimization techniques such as delayed double deletion and binlog subscription to mitigate inconsistency in high‑concurrency backend systems.

Architecture Digest

Cache operations are a core technique for improving system performance. This article introduces how to read from cache (cache hit vs. cache miss) and how to write to cache (updating or deleting cached data).

Read Cache

A read can result in two situations:

Cache hit: the data exists in the cache and is fetched and returned directly.

Cache miss: the data is not in the cache, so the system retrieves it from the database, writes it into the cache, and then returns it.
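The read path above (often called cache-aside) can be sketched as follows. This is a minimal illustration, not a production client: `cache` and `db` are stand-in in-memory maps rather than real Redis and MySQL connections.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheAsideRead {
    // Stand-ins for a real cache (e.g., Redis) and database (e.g., MySQL).
    static Map<String, String> cache = new HashMap<>();
    static Map<String, String> db = new HashMap<>();

    // Cache-aside read: try the cache first, fall back to the database on a miss.
    static String read(String key) {
        String value = cache.get(key);
        if (value != null) {
            return value;            // cache hit: return directly
        }
        value = db.get(key);         // cache miss: load from the database
        if (value != null) {
            cache.put(key, value);   // populate the cache for later reads
        }
        return value;
    }

    public static void main(String[] args) {
        db.put("name", "arch-digest");
        System.out.println(read("name")); // first read misses and loads from the DB
        System.out.println(read("name")); // second read hits the cache
    }
}
```

Note that the miss path both returns the value and populates the cache, so subsequent reads of the same key become hits.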

Write Cache

Writing can be divided into update cache and delete cache (also called cache eviction).

Update Cache

Updates differ for simple data types (e.g., strings) and complex data types (e.g., hashes). Updating a simple type can be done directly, while updating a complex type usually requires four steps:

Fetch the data from the cache.

Deserialize it into an object.

Modify the object.

Serialize the object and store it back into the cache.

Because each write incurs extra computation, it is often better to defer cache updates until a cache miss occurs during a read.
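The four-step update of a complex type can be sketched like this. The `User` class and its comma-separated serialization are illustrative stand-ins for whatever object and format (e.g., JSON) a real system would use:

```java
import java.util.HashMap;
import java.util.Map;

public class ComplexUpdate {
    static Map<String, String> cache = new HashMap<>();

    // A simple object cached as a serialized string (stand-in for JSON).
    static class User {
        String name;
        int age;
        User(String name, int age) { this.name = name; this.age = age; }
        String serialize() { return name + "," + age; }
        static User deserialize(String s) {
            String[] parts = s.split(",");
            return new User(parts[0], Integer.parseInt(parts[1]));
        }
    }

    // The four steps: fetch, deserialize, modify, serialize back.
    static void updateAge(String key, int newAge) {
        String raw = cache.get(key);            // 1. fetch the data from the cache
        User user = User.deserialize(raw);      // 2. deserialize it into an object
        user.age = newAge;                      // 3. modify the object
        cache.put(key, user.serialize());       // 4. serialize and store it back
    }

    public static void main(String[] args) {
        cache.put("user:1", new User("arch-digest", 20).serialize());
        updateAge("user:1", 21);
        System.out.println(cache.get("user:1")); // prints "arch-digest,21"
    }
}
```

Every such write pays the deserialize/serialize cost, which is why deferring the update to the next cache miss is often cheaper.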

Delete Cache

Cache deletion (eviction) simply removes the entry from the cache store.

Cache Operation Order

Cache is usually used together with a database, and the order of operations (DB → cache or cache → DB) can lead to data inconsistency and concurrency problems.

Database First, Then Cache

If the database write succeeds but the subsequent cache operation fails, the database holds the new value while the cache still contains the old value, causing stale reads.

Suppose the cache currently holds:

String name = "arch-digest";

Updating the value to juejin with the "DB-then-cache" order:

public void update(String name) {
    db.update(...);     // 1. update the database first
    cache.delete(name); // 2. then delete the cache entry
}

If cache.delete(name) fails, subsequent reads will return the outdated cached value.

Cache First, Then Database

This order can cause both inconsistency and concurrency problems. If the cache is updated first, it holds the new value while the database still has the old one until the database write completes (or indefinitely, if that write fails). Deleting the cache first narrows that window but leaves a race: a concurrent reader can miss the cache, read the old value from the database, and re-populate the cache before the writer's database update lands.

Data‑Consistency Optimization Strategies

Because perfect consistency is hard in distributed systems, three practical approaches are commonly used:

Do nothing – accept temporary inconsistency when the business allows it.

Delayed double delete – after updating the database, delete the cache, wait a short period (e.g., 1 s), then delete the cache again to eliminate stale data.

Binlog subscription – listen to MySQL binlog events (using tools like Canal) and push updates to Redis, optionally via message queues such as Kafka or RabbitMQ.

Delayed double delete can be implemented synchronously (adding a sleep) or asynchronously (spawning a background thread) to reduce latency impact.
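An asynchronous variant of delayed double delete can be sketched as follows; the in-memory maps and the 1-second delay are illustrative choices, and a real system would tune the delay to cover its read-and-repopulate window:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedDoubleDelete {
    static Map<String, String> cache = new ConcurrentHashMap<>();
    static Map<String, String> db = new ConcurrentHashMap<>();
    static ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Delayed double delete: update the DB, delete the cache, then delete it
    // again after a delay so that any stale value re-cached by a concurrent
    // read between the two deletes is also evicted.
    static void update(String key, String value) {
        db.put(key, value);                          // 1. update the database
        cache.remove(key);                           // 2. first cache delete
        scheduler.schedule(() -> {                   // 3. second delete after ~1 s
            cache.remove(key);
        }, 1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        cache.put("name", "arch-digest");
        update("name", "juejin");
        // Simulate a concurrent reader re-caching the stale value
        // between the first and second delete.
        cache.put("name", "arch-digest");
        Thread.sleep(1500);                          // wait past the delayed delete
        System.out.println(cache.get("name"));       // stale entry has been evicted
        scheduler.shutdown();
    }
}
```

Scheduling the second delete on a background executor keeps the write path fast, at the cost of a short window in which readers may still see the stale value.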

Binlog‑based synchronization works similarly to MySQL master‑slave replication, ensuring that cache updates follow database changes.
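The consumer side of binlog-based synchronization can be sketched as below. The `BinlogEvent` class and the hand-constructed events are hypothetical: in a real deployment these events would arrive from a tool such as Canal (possibly through Kafka or RabbitMQ), not be built by hand, and Canal's actual client API is not shown here.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class BinlogSync {
    // Hypothetical change event derived from a MySQL binlog entry.
    static class BinlogEvent {
        String table, key, value;
        BinlogEvent(String table, String key, String value) {
            this.table = table;
            this.key = key;
            this.value = value;
        }
    }

    static Map<String, String> cache = new HashMap<>();

    // Consumer that applies each database change to the cache, so the cache
    // follows the database the way a replica follows its master.
    static Consumer<BinlogEvent> cacheUpdater = event ->
            cache.put(event.table + ":" + event.key, event.value);

    public static void main(String[] args) {
        // Simulate two binlog events arriving in commit order.
        cacheUpdater.accept(new BinlogEvent("user", "1", "arch-digest"));
        cacheUpdater.accept(new BinlogEvent("user", "1", "juejin"));
        System.out.println(cache.get("user:1")); // prints "juejin": last write wins
    }
}
```

Because events are applied in binlog (commit) order, the cache converges to the database's latest state without application code touching the cache on the write path.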

Written by Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.