Mastering Distributed Caching: Easy-Cache’s Multi‑Level Dynamic Upgrade and Consistency
This article introduces Easy-Cache, a Spring-AOP-based caching framework that eliminates repetitive cache code through annotation-driven operations, multi-level caching (Redis plus a local cache), dynamic upgrade/downgrade, elastic expiration, and Lua-script-backed consistency mechanisms for high-availability distributed systems.
1. Introduction
In distributed system development, caching is a persistent pain point: ensuring data consistency, handling Redis failures, and mitigating cache penetration, breakdown, and avalanche. Repeating similar cache handling code across projects wastes time and introduces errors.
1.1 Core Idea
To free developers from repetitive cache code and let them focus on business logic, Easy-Cache implements a unified cache consistency solution inspired by RocksCache. Using Spring AOP, it provides simple annotation‑driven cache operations, supports Redis cluster and local secondary cache, and offers features such as multi‑level dynamic upgrade/downgrade, fault tolerance, elastic expiration, and eventual consistency guarantees. Developers only need to set the appropriate annotation parameters.
2. Core Implementation
2.1 Goal: Simple, Low‑Intrusion Cache Tool
The goal is a cache tool that is easy to use and minimally invasive. Spring AOP intercepts @Cacheable and @CacheUpdate annotations, applying cache logic without requiring developers to write any cache code.
@Cacheable(clusterId = "cluster1", prefix = "user", keys = {"#userId"})
public User getUserById(Long userId) {
    return userRepository.findById(userId);
}

@CacheUpdate(clusterId = "cluster1", prefix = "user", keys = {"#userId"})
public User update(User user) {
    return userRepository.update(user);
}

The aspect implements the cache query and update logic, ensures data consistency, and provides fault tolerance (penetration prevention, multi-level caching, automatic upgrade/downgrade).
2.2 Design Approach
The tool’s entry point is an AOP interceptor that processes the annotations, then a central scheduler performs fault handling, cache query, result processing, and response. The workflow is illustrated below:
Annotation‑driven: Spring AOP intercepts @Cacheable and @CacheUpdate to trigger cache query and update.
Unified scheduling: The scheduler handles all cache logic.
Fault tolerance: A decorator layers fault handling onto each cache to prevent cache penetration (see the sketch after this list).
Multi‑level cache: Redis + local cache ensures high availability; health monitoring enables automatic downgrade/upgrade.
Elastic consistency: Lua scripts guarantee atomic operations; a configurable inconsistency window (default 1.5 s) provides eventual consistency, with 0 s for real‑time consistency.
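The fault-tolerance decorator mentioned above can be pictured as follows. The Cache interface and the FaultManager collaborator are assumed names for illustration, not Easy-Cache's actual API:

// Assumed collaborator that counts failures and drives up/downgrade decisions (2.3).
interface FaultManager {
    void recordFailure(Throwable cause);
}

// Plain cache abstraction (assumed for illustration).
interface Cache {
    Object get(String key);
    void put(String key, Object value);
}

// Decorator that adds fault handling around any Cache implementation.
class FaultTolerantCache implements Cache {
    private final Cache delegate;
    private final FaultManager faultManager;

    FaultTolerantCache(Cache delegate, FaultManager faultManager) {
        this.delegate = delegate;
        this.faultManager = faultManager;
    }

    @Override
    public Object get(String key) {
        try {
            return delegate.get(key);
        } catch (RuntimeException e) {
            faultManager.recordFailure(e); // feed the decision engine
            return null;                   // treat as a miss instead of failing the request
        }
    }

    @Override
    public void put(String key, Object value) {
        try {
            delegate.put(key, value);
        } catch (RuntimeException e) {
            faultManager.recordFailure(e);
        }
    }
}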
2.3 Cache Decision: Dynamic Multi‑Level Upgrade/Downgrade
The default strategy prefers Redis; if Redis is unavailable, it falls back to the local cache. A decision engine monitors failure events and switches between caches. The flow works as follows, with a minimal sketch after the steps:
Request A checks the decision engine; if failure count is below threshold, it queries Redis.
Redis failure triggers an exception event; the fault manager increments the failure count.
When the threshold is reached, the cluster is marked unavailable and a health‑check task starts.
Request B sees the cluster as unavailable and directly uses the local cache (downgrade).
After successful health checks, the cluster is marked available again and requests upgrade back to Redis.
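A minimal sketch of such a decision engine, following the steps above; the class name, failure threshold, and probe interval are illustrative assumptions:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

class CacheDecisionEngine {
    private static final int FAILURE_THRESHOLD = 5; // illustrative value

    private final AtomicInteger failures = new AtomicInteger();
    private final AtomicBoolean redisAvailable = new AtomicBoolean(true);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private volatile ScheduledFuture<?> probeTask;

    // Requests ask this before every cache access: Redis, or local downgrade?
    boolean useRedis() {
        return redisAvailable.get();
    }

    // Called by the fault manager whenever a Redis operation throws.
    void onRedisFailure() {
        if (failures.incrementAndGet() >= FAILURE_THRESHOLD
                && redisAvailable.compareAndSet(true, false)) {
            // Threshold reached: mark the cluster unavailable, start health checks.
            probeTask = scheduler.scheduleWithFixedDelay(this::probe, 1, 1, TimeUnit.SECONDS);
        }
    }

    private void probe() {
        if (pingRedis()) {            // health check succeeded
            failures.set(0);
            redisAvailable.set(true); // upgrade back to Redis
            probeTask.cancel(false);  // stop probing once recovered
        }
    }

    private boolean pingRedis() {
        return false; // placeholder: real code would send PING to the cluster
    }
}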
2.4 Data Consistency Guarantee Mechanism
Using a Redis Hash structure and Lua scripts, Easy-Cache ensures eventual consistency. Each cache entry stores value, lockInfo ('locked'/'unLock'), unlockTime, and owner (a unique lock ID). The Lua script implements the following logic:
If data is empty and lock expired → lock the key and return NEED_QUERY to fetch from DB.
If data is empty and locked → return NEED_WAIT and retry after 100 ms.
If data exists and locked → return SUCCESS_NEED_QUERY with cached data, then async DB fetch.
If data exists and unlocked → return SUCCESS with cached data.
private static final String GET_SH =
    "local key = KEYS[1]\n" +
    "local newUnlockTime = ARGV[1]\n" +
    "local owner = ARGV[2]\n" +
    "local currentTime = tonumber(ARGV[3])\n" +
    "local value = redis.call('HGET', key, 'value')\n" +
    "local unlockTime = redis.call('HGET', key, 'unlockTime')\n" +
    "local lockOwner = redis.call('HGET', key, 'owner')\n" +
    "local lockInfo = redis.call('HGET', key, 'lockInfo')\n" +
    "if unlockTime and currentTime > tonumber(unlockTime) then\n" +
    "  redis.call('HMSET', key, 'lockInfo', 'locked', 'unlockTime', newUnlockTime, 'owner', owner)\n" +
    "  return {value, 'NEED_QUERY'}\n" +
    "end\n" +
    "if not value or value == '' then\n" +
    "  if lockOwner and lockOwner ~= owner then\n" +
    "    return {value, 'NEED_WAIT'}\n" +
    "  end\n" +
    "  redis.call('HMSET', key, 'lockInfo', 'locked', 'unlockTime', newUnlockTime, 'owner', owner)\n" +
    "  return {value, 'NEED_QUERY'}\n" +
    "end\n" +
    "if lockInfo and lockInfo == 'locked' then\n" +
    "  return {value, 'SUCCESS_NEED_QUERY'}\n" +
    "end\n" +
    "return {value, 'SUCCESS'}";

When the cache is updated, another Lua script atomically clears the lock owner and marks the entry as locked, so that subsequent reads refresh it from the DB.
private static final String INVALID_SH =
    "local key = KEYS[1]\n" +
    "local newUnlockTime = tonumber(ARGV[1])\n" +
    "redis.call('HDEL', key, 'owner')\n" +
    "local value = redis.call('HGET', key, 'value')\n" +
    "redis.call('HSET', key, 'lockInfo', 'locked')\n" +
    "if not value or value == '' then\n" +
    "  return {true, 'EMPTY_VALUE_SUCCESS'}\n" +
    "end\n" +
    "if newUnlockTime > 0 then\n" +
    "  redis.call('HSET', key, 'unlockTime', newUnlockTime)\n" +
    "end\n" +
    "return {'', 'SUCCESS'}";
2.4.1 Consistency in Read‑Read Concurrency
Thread A acquires the lock, queries the DB, updates the cache, and releases the lock. Thread B, seeing the lock, waits and then reads the fresh cache value, ensuring only one DB query.
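The caller-side handling of the four GET_SH statuses can be sketched like this; only the 100 ms wait and the status names come from the description above, while evalGetScript, refreshFromDbAsync, and loadFromDbAndUpdate are assumed helpers:

// Result pair returned by GET_SH: the cached value plus a status flag.
record ScriptResult(Object value, String status) {}

Object readThrough(String key) throws InterruptedException {
    while (true) {
        ScriptResult r = evalGetScript(key); // assumed helper: runs GET_SH
        switch (r.status()) {
            case "SUCCESS":             // data present, unlocked: serve it directly
                return r.value();
            case "SUCCESS_NEED_QUERY":  // data present but locked: serve it, refresh async
                refreshFromDbAsync(key);
                return r.value();
            case "NEED_QUERY":          // we won the lock: load from DB and update cache
                return loadFromDbAndUpdate(key);
            case "NEED_WAIT":           // someone else holds the lock: retry after 100 ms
            default:
                Thread.sleep(100);
        }
    }
}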
2.4.2 Consistency in Read‑Write Concurrency
If a write occurs while a read holds the lock, the write forces lock removal and marks the key as locked, preventing the stale read from overwriting the fresh value.
2.5 Lua Script Pre‑Loading: Reducing Overhead
2.5.1 Performance Cost
Running Lua scripts in Redis guarantees atomicity, but it is not free: each entry carries extra memory (≈50 bytes per key), and every EVAL call ships the full script body over the network (≈500 bytes per script).
2.5.2 Pre‑Loading Mechanism
At service startup, the LuaShPublisher component loads all required scripts into Redis via SCRIPT LOAD and stores the returned SHA1 hashes locally; subsequent calls execute the scripts by hash with EVALSHA instead of resending them with EVAL. It includes retry logic with exponential back-off to handle temporary Redis unavailability.
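A minimal sketch of such preloading with the Jedis client; the class, field, and retry parameters here are illustrative, not Easy-Cache's actual LuaShPublisher:

import java.util.List;
import redis.clients.jedis.Jedis;

class ScriptPreloader {
    private String scriptSha1; // SHA1 returned by SCRIPT LOAD, reused by EVALSHA

    // Load the script with retries and exponential back-off.
    void preload(Jedis jedis, String script) throws InterruptedException {
        long backoffMs = 100;
        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                scriptSha1 = jedis.scriptLoad(script); // SCRIPT LOAD returns the SHA1
                return;
            } catch (RuntimeException e) {
                Thread.sleep(backoffMs); // Redis temporarily unavailable: back off
                backoffMs *= 2;
            }
        }
        throw new IllegalStateException("could not preload Lua script");
    }

    // Execute by hash: only the 40-char SHA1 crosses the wire, not the script body.
    Object run(Jedis jedis, List<String> keys, List<String> args) {
        return jedis.evalsha(scriptSha1, keys, args);
    }
}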
3. Core Features
3.1 Distributed Lock for Consistency
Atomic operations via Lua scripts.
Eventual consistency through distributed locking.
Performance optimization by pre‑loading scripts.
3.2 Multi‑Level Cache Architecture
High availability: automatic failover to local cache when Redis is down.
Smart upgrade: automatic switch back to Redis after recovery.
3.3 Elastic Expiration Mechanism
Soft delete via marking keys as expired.
Configurable expiration window (default 1.5 s) for eventual consistency; set to 0 s for real‑time consistency.
Ensures cache‑DB consistency.
3.4 Annotation‑Driven Simplified Design
One‑line annotation replaces boilerplate cache code.
Low learning curve; developers only need to understand annotation parameters.
Uniform cache operation standards.
4. Conclusion
Easy‑Cache addresses common caching challenges in distributed systems by providing annotation‑driven, multi‑level, fault‑tolerant, and consistently synchronized caching. It eliminates repetitive code, mitigates cache penetration, breakdown, and inconsistency, and ensures high availability through automatic downgrade and health‑check mechanisms.
Zhuanzhuan Tech
A platform for Zhuanzhuan R&D and industry peers to learn and exchange technology, regularly sharing frontline experience and cutting‑edge topics. We welcome practical discussions and sharing; contact waterystone with any questions.