How Easy-Cache Solves Distributed Cache Pain Points with Multi-Level Design

This article introduces Easy-Cache, a Spring-AOP-based caching framework that provides annotation-driven, multi-level caching (Redis plus local) with dynamic upgrade/downgrade, elastic expiration, and Lua-script-backed data consistency. It eliminates repetitive cache code and handles Redis failures, cache penetration, cache breakdown, and eventual-consistency challenges.

Sanyou's Java Diary

1. Introduction

In distributed system development, cache problems are a persistent pain point: ensuring data consistency, handling Redis failures, and dealing with cache penetration, breakdown, and avalanche. Repeating similar cache handling code across projects wastes time and introduces bugs.

1.1 Core Idea

Easy-Cache implements a unified cache consistency solution inspired by RocksCache. Using Spring AOP, developers annotate methods to obtain cache capabilities without writing any cache logic. It supports Redis cluster and local secondary cache, offering multi-level dynamic upgrade/downgrade, fault tolerance, elastic expiration, and eventual consistency.

2. Core Implementation

2.1 Implementation Goal: Simple and Easy-to-Use Cache Tool

The goal is a minimally invasive cache tool. Spring AOP intercepts @Cacheable and @CacheUpdate annotations, applying cache logic automatically.

@Cacheable(clusterId = "cluster1", prefix = "user", keys = {"#userId"})
public User getUserById(Long userId) {
    return userRepository.findById(userId);
}

@CacheUpdate(clusterId = "cluster1", prefix = "user", keys = {"#userId"})
public User update(User user) {
    return userRepository.update(user);
}

The aspect implements:

Cache query and update logic

Data consistency guarantees

Fault tolerance (prevent penetration, multi-level cache, automatic upgrade/downgrade)

2.2 Design Idea

The tool's entry point is an AOP interceptor that hands off to a central scheduler, which performs fault handling, cache query, result processing, and response assembly.

Cache design flow diagram

Annotation-driven: Spring AOP intercepts @Cacheable and @CacheUpdate.

Unified scheduler: Handles all cache query/update logic.

Fault tolerance: Decorator pattern adds protection against cache penetration.

Multi-level cache: Redis + local cache with health monitoring and automatic upgrade/downgrade.

Elastic consistency: Lua scripts ensure atomic operations; configurable inconsistency window (default 1.5 s) for eventual consistency.
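The interceptor-plus-scheduler split above can be sketched in plain Java. This is an illustrative flow with hypothetical names (`CacheScheduler`, `NULL_SENTINEL`), not Easy-Cache's actual classes: look up the cache, fall back to the loader on a miss, and cache empty results too, so repeated misses cannot penetrate to the database.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal sketch of the central scheduler flow (hypothetical names).
class CacheScheduler {
    static final Object NULL_SENTINEL = new Object(); // cached "empty" marker

    final Map<String, Object> store = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    <T> Optional<T> get(String key, Supplier<T> loader) {
        Object cached = store.get(key);
        if (cached == NULL_SENTINEL) {
            return Optional.empty();          // known-empty: no DB hit
        }
        if (cached != null) {
            return Optional.of((T) cached);   // cache hit
        }
        T loaded = loader.get();              // cache miss: query the DB
        store.put(key, loaded == null ? NULL_SENTINEL : loaded);
        return Optional.ofNullable(loaded);
    }
}
```

In the real framework the AOP aspect plays the role of the caller here: the intercepted method body is the `loader`, and the decorator adding penetration protection corresponds to the `NULL_SENTINEL` branch.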

2.3 Cache Decision: Dynamic Multi-Level Upgrade/Downgrade

The default strategy prefers Redis; if Redis is unavailable, it falls back to local cache. A decision engine monitors failures and switches accordingly.

Cache decision flow diagram

Request A goes through the decision engine and still prefers Redis.

Redis throws an exception; the fault manager increments the failure count.

When the failure threshold is reached, the cluster is marked unavailable and a probe task starts.

Request B sees the cluster as unavailable and directly uses local cache (cache downgrade).

After successful probe, the cluster is marked available and future requests upgrade back to Redis.
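The failure-count logic behind these steps can be sketched as below. This is a simplified model with hypothetical names (`ClusterHealth`, `onProbeSuccess`), assuming a plain consecutive-failure threshold; Easy-Cache's real decision engine and probe task are more involved.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the downgrade decision: after `threshold` Redis failures
// the cluster is marked unavailable and requests use the local cache;
// a successful probe flips it back to Redis.
class ClusterHealth {
    private final int threshold;
    private final AtomicInteger failures = new AtomicInteger();
    private volatile boolean available = true;

    ClusterHealth(int threshold) { this.threshold = threshold; }

    void onRedisFailure() {
        if (failures.incrementAndGet() >= threshold) {
            available = false;                // downgrade: use local cache
        }
    }

    void onProbeSuccess() {
        failures.set(0);
        available = true;                     // upgrade: back to Redis
    }

    boolean preferRedis() { return available; }
}
```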

2.4 Data Consistency Guarantee Mechanism

Using Redis hash structures and Lua scripts, Easy-Cache guarantees eventual consistency.

value : actual data

lockInfo : lock status ('locked' or 'unLock')

unlockTime : lock expiration timestamp

owner : unique lock owner ID (used for distributed lock)

Lua script query logic returns different codes:

If data is empty and lock expired → NEED_QUERY (fetch from DB)

If data is empty and locked → NEED_WAIT (sleep then retry)

If data exists and locked → SUCCESS_NEED_QUERY (return cached data, async DB fetch)

If data exists and unlocked → SUCCESS (return cached data)

private static final String GET_SH =
    "local key = KEYS[1]\n" +
    "local newUnlockTime = ARGV[1]\n" +
    "local owner = ARGV[2]\n" +
    "local currentTime = tonumber(ARGV[3])\n" +
    "local value = redis.call('HGET', key, 'value')\n" +
    "local unlockTime = redis.call('HGET', key, 'unlockTime')\n" +
    "local lockOwner = redis.call('HGET', key, 'owner')\n" +
    "local lockInfo = redis.call('HGET', key, 'lockInfo')\n" +
    "if unlockTime and currentTime > tonumber(unlockTime) then\n" +
    "    redis.call('HMSET', key, 'lockInfo', 'locked', 'unlockTime', newUnlockTime, 'owner', owner)\n" +
    "    return {value, 'NEED_QUERY'}\n" +
    "end\n" +
    "if not value or value == '' then\n" +
    "    if lockOwner and lockOwner ~= owner then\n" +
    "        return {value, 'NEED_WAIT'}\n" +
    "    end\n" +
    "    redis.call('HMSET', key, 'lockInfo', 'locked', 'unlockTime', newUnlockTime, 'owner', owner)\n" +
    "    return {value, 'NEED_QUERY'}\n" +
    "end\n" +
    "if lockInfo and lockInfo == 'locked' then\n" +
    "    return {value, 'SUCCESS_NEED_QUERY'}\n" +
    "end\n" +
    "return {value, 'SUCCESS'}";
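On the Java side, the caller dispatches on the code the script returns. The sketch below is an assumed mapping based on the four codes described above (hypothetical names `CacheAction` and `CodeDispatch`), not Easy-Cache's actual handler:

```java
// What the caller does for each result code of the query script.
enum CacheAction { QUERY_DB, WAIT_AND_RETRY, RETURN_AND_REFRESH, RETURN_VALUE }

class CodeDispatch {
    static CacheAction forCode(String code) {
        switch (code) {
            case "NEED_QUERY":         return CacheAction.QUERY_DB;          // fetch from DB, then write cache
            case "NEED_WAIT":          return CacheAction.WAIT_AND_RETRY;    // sleep briefly, rerun script
            case "SUCCESS_NEED_QUERY": return CacheAction.RETURN_AND_REFRESH; // return cached value, async DB fetch
            case "SUCCESS":            return CacheAction.RETURN_VALUE;      // return cached value
            default: throw new IllegalArgumentException("unknown code: " + code);
        }
    }
}
```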

The cache-invalidation logic deletes the lock owner, marks the entry as locked (soft delete), and sets a new unlock time (default 1.5 s later). If the stored value is empty, it returns success with an empty-value code.

private static final String INVALID_SH =
    "local key = KEYS[1]\n" +
    "local newUnlockTime = tonumber(ARGV[1])\n" +
    "redis.call('HDEL', key, 'owner')\n" +
    "local value = redis.call('HGET', key, 'value')\n" +
    "redis.call('HSET', key, 'lockInfo', 'locked')\n" +
    "if not value or value == '' then\n" +
    "    return {true, 'EMPTY_VALUE_SUCCESS'}\n" +
    "end\n" +
    "if newUnlockTime > 0 then\n" +
    "    redis.call('HSET', key, 'unlockTime', newUnlockTime)\n" +
    "end\n" +
    "return {'', 'SUCCESS'}";

2.5 Lua Script Preloading: Solving Overhead

2.5.1 Performance Overhead of the Design

Storing lock information adds ~50 bytes per key, which is acceptable compared to the risk of inconsistency. Transmitting the full Lua script (~500 bytes) for each cache read creates significant network I/O overhead.

Memory overhead: extra lock fields per key.

Network I/O overhead: full script transfer each call.

In production, Easy-Cache uses the EVALSHA command, sending only the SHA1 hash (≈40 bytes) instead of the full script, reducing network traffic by about 92 %.
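The arithmetic behind that figure is easy to check: EVALSHA sends the 40-character SHA-1 hex digest instead of the script body, and 40 bytes against a ~500-byte script is a saving of 1 − 40/500 = 92%. The helper below (not the framework's code, just an illustration) computes the digest Redis keys scripts by:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// SHA-1 hex digest of a script body; EVALSHA sends this 40-char
// string per call instead of the full script.
class ScriptDigest {
    static String sha1Hex(String script) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-1")
                    .digest(script.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 always present on the JVM
        }
    }
}
```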

2.5.2 Lua Script Preloading

Lua script preloading flow diagram

During service startup, the LuaShPublisher component automatically loads all predefined Lua scripts (get, set, unlock, invalidate) to Redis using SCRIPT LOAD, records the returned SHA1 values locally, and applies an exponential backoff retry strategy for transient failures.
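The retry strategy can be sketched as follows. This is a generic exponential-backoff wrapper under assumed semantics (delay doubles per attempt, re-throw after `maxAttempts`); `BackoffLoader` is a hypothetical name and the `scriptLoad` supplier stands in for the actual SCRIPT LOAD call:

```java
import java.util.function.Supplier;

// Retry a load operation with exponential backoff: wait
// baseDelayMs * 2^attempt between attempts, give up after maxAttempts.
class BackoffLoader {
    static <T> T loadWithBackoff(Supplier<T> scriptLoad,
                                 int maxAttempts, long baseDelayMs) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return scriptLoad.get();      // e.g. SCRIPT LOAD, returns the SHA1
            } catch (RuntimeException e) {
                last = e;                     // transient failure: back off and retry
                try {
                    Thread.sleep(baseDelayMs << attempt);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
        if (last == null) throw new IllegalArgumentException("maxAttempts must be > 0");
        throw last;                           // all attempts exhausted
    }
}
```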

3. Core Features

3.1 Distributed Lock Ensures Consistency

Atomic operations via Lua scripts.

Final consistency through distributed lock.

Performance optimization by preloading scripts at startup.

3.2 Multi-Level Cache Architecture

High availability: real-time health monitoring switches to local cache on Redis failure.

Smart upgrade: automatic promotion back to Redis when the cluster recovers.

3.3 Elastic Expiration Mechanism

Mark-delete (soft delete) instead of immediate removal.

Configurable expiration window (default 1.5 s) for eventual consistency; set to 0 s for real‑time consistency.

Guarantees data consistency between cache and database.

3.4 Annotation-Driven Simplified Design

One-line annotation replaces boilerplate cache code.

Low learning curve—developers only need to understand annotation parameters.

Uniform cache operation pattern across the codebase.

4. Conclusion

Easy-Cache addresses common cache pain points by providing a unified, annotation‑driven solution that eliminates repetitive code, prevents cache penetration and breakdown, ensures data consistency with Redis‑Hash and Lua‑based distributed locks, and maintains high availability through automatic downgrade and probe mechanisms.

Eliminates repetitive cache handling code.

Prevents cache penetration via empty‑value caching.

Avoids cache breakdown with distributed locks and mark‑delete.

Ensures data consistency through atomic Lua scripts.

Handles Redis downtime with automatic downgrade and health probing.

Tags: Cache, Distributed Lock, spring-aop, Lua, Multi-level Cache