Combining Redis with Local Guava Cache for Efficient Lazy Loading
This article explains how to integrate Redis with a Guava-based local cache to lazy-load high-frequency data and reduce Redis I/O pressure. It provides code examples, discusses advantages and drawbacks, and outlines suitable microservice scenarios.
In many backend systems, Redis serves as a cache for high-frequency data, improving performance and reducing pressure on the relational database. Under heavy read/write traffic, however, Redis itself can become a bottleneck.
To alleviate this, a local in-process cache (Guava) can be placed in front of Redis: queries first check the fast local cache and fall back to Redis only when necessary, minimizing network I/O latency.
Design Example
The lazy-loading cache pattern writes data into the cache only after a cache miss, so only items that are actually queried occupy cache space. The flow is: check the local cache first, then Redis, and finally the database.
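As a runnable illustration of this three-step flow, here is a minimal sketch. Plain `HashMap`s stand in for the Guava cache, Redis, and the database; all class and field names here are illustrative, not part of the article's code.

```java
import java.util.HashMap;
import java.util.Map;

// Three-step lookup: local cache -> Redis -> database, lazily
// populating both cache layers on the way back out.
class TwoLevelLookup {
    static Map<Long, String> localCache = new HashMap<>(); // stand-in for Guava
    static Map<Long, String> redis = new HashMap<>();      // stand-in for Redis
    static Map<Long, String> database = new HashMap<>();   // stand-in for MySQL

    static String get(long id) {
        String v = localCache.get(id);    // 1. check the local cache
        if (v != null) return v;
        v = redis.get(id);                // 2. check Redis
        if (v == null) {
            v = database.get(id);         // 3. load from the database
            redis.put(id, v);             //    lazy-load into Redis
        }
        localCache.put(id, v);            // populate the local cache
        return v;
    }

    public static void main(String[] args) {
        database.put(1L, "user-1");
        System.out.println(get(1L));                    // first call walks all three steps
        System.out.println(localCache.containsKey(1L)); // both cache layers are now warm
    }
}
```

On the second call for the same id, the method returns at step 1 without touching Redis or the database, which is exactly the I/O reduction the pattern is after.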
Code Example
// Pseudocode sketch. "Xx" stands for your business object, e.g. User, Goods, etc.
@Component
public class XxLazyCache {

    @Autowired
    private RedisTemplate<String, Xx> redisTemplate;

    @Autowired
    private XxService xxService; // your business service

    /**
     * Query-driven cache loading: a lookup drives the cache fill. The calling
     * code should guarantee that the id refers to a row that actually exists
     * in the database.
     */
    public Xx getXx(long id) {
        // 1. Check whether the cache already holds the data.
        Xx xxCache = getXxFromCache(id);
        if (xxCache != null) {
            return xxCache; // guard clause keeps the code readable
        }
        // 2. Load from the database. We assume every id passed in has a matching row.
        Xx xx = xxService.getXxById(id);
        // 3. Populate the cache. This is the lazy-loading step: the next
        //    query for this id will be served from Redis.
        setXxToCache(xx);
        return xx;
    }

    /**
     * After a row is updated or deleted in the database, evict its cache entry.
     */
    public void deleteXxFromCache(long id) {
        String key = "Xx:" + id;
        redisTemplate.delete(key);
    }

    private void setXxToCache(Xx xx) {
        String key = "Xx:" + xx.getId();
        redisTemplate.opsForValue().set(key, xx);
    }

    private Xx getXxFromCache(long id) {
        String key = "Xx:" + id;
        return redisTemplate.opsForValue().get(key);
    }
}
// Business service class
@Service
public class XxService {

    @Autowired
    private XxLazyCache xxLazyCache;

    public Xx getXxById(long id) {
        // implementation omitted
        return xx;
    }

    public void updateXx(Xx xx) {
        // update the MySQL row (omitted), then evict the cache entry
        xxLazyCache.deleteXxFromCache(xx.getId());
    }

    public void deleteXx(long id) {
        // delete the MySQL row (omitted), then evict the cache entry
        xxLazyCache.deleteXxFromCache(id);
    }
}
// Entity class
@Data
public class Xx {
    private Long id;
    // ... omitted
}

The above Java implementation demonstrates a lazy-loading cache: data is written to Redis only when it is actually queried, and every update or delete evicts the corresponding Redis entry to keep the cache consistent with the database.
Advantages
Keeps the cache small by caching only data that is actually queried, avoiding memory wasted on cold data.
Low intrusion on CRUD code; the only extra step is evicting the cache entry after an update or delete.
Can be bolted onto an existing service without pre-loading (warming) any data.
Disadvantages
Not suitable when the key space grows without bound, since every distinct query leaves an entry behind.
Less effective as a global cache in microservice environments, because each instance's local cache is independent.
Microservice Scenario
In a streaming data pipeline, each device has a unique code. A Redis hash maps device codes to auto-increment IDs generated with Redis INCR, while a Guava local cache holds recently accessed IDs, cutting Redis round trips and improving throughput.
/**
 * Demonstrates combining a Redis auto-increment counter, a Redis hash, and a
 * Guava local cache to generate and cache per-device sequence numbers.
 */
@Component
public class DeviceIncCache {

    /** Local cache of recently used device sequence numbers. */
    private final Cache<String, Integer> localCache = CacheBuilder.newBuilder()
            .concurrencyLevel(16)
            .initialCapacity(1000)
            .maximumSize(10000)
            .expireAfterAccess(1, TimeUnit.HOURS)
            .build();

    @Autowired
    private RedisTemplate<String, Integer> redisTemplate;

    private static final String DEVICE_INC_COUNT = "device_inc_count";
    private static final String DEVICE_INC_VALUE = "device_inc_value";

    /** Returns the sequence number for a device, assigning one on first sight. */
    public int getInc(String deviceCode) {
        // Guava's Cache has no plain get(key); getIfPresent returns null on a miss.
        Integer inc = localCache.getIfPresent(deviceCode);
        if (inc != null) {
            return inc;
        }
        inc = (Integer) redisTemplate.opsForHash().get(DEVICE_INC_VALUE, deviceCode);
        if (inc == null) {
            // Note: this check-then-set is not atomic; two concurrent callers
            // could assign different numbers to the same device code.
            inc = redisTemplate.opsForValue().increment(DEVICE_INC_COUNT).intValue();
            redisTemplate.opsForHash().put(DEVICE_INC_VALUE, deviceCode, inc);
        }
        localCache.put(deviceCode, inc);
        return inc;
    }
}

This cache reduces Redis pressure while providing sub-millisecond read latency for frequently accessed device IDs.
Conclusion
Local cache size stays bounded thanks to maximumSize and expiration policies.
Suitable for exact-key query scenarios and microservice architectures.
Cached data should be immutable (like the device IDs above), or every write path must evict the corresponding entries, for the cache layers to stay consistent.
Overall performance gains are significant for read-heavy workloads.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, as well as architecture evolution driven by internet technologies. Idea-driven, sharing-minded architects are welcome to exchange and learn together.
