Implementing a Two‑Level Cache with Caffeine and Redis in Spring Boot
This article explains how to design and implement a two‑level caching architecture in Spring Boot by combining a local Caffeine cache with a remote Redis cache, covering manual approaches, annotation‑driven management with Spring Cache, and a custom AOP solution to minimize code intrusion.
In high‑performance service architectures, caching is essential: hot data is stored first in a remote cache such as Redis or Memcached, and the database is queried only on a cache miss.
When a remote cache alone is not fast enough, a local in‑process cache (e.g., Guava or Caffeine) can be added as the first level, forming a two‑level cache architecture that further reduces latency.
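The read path of such an architecture can be sketched in plain Java, using two maps as stand-ins for Caffeine and Redis (a simplified illustration, not the real clients):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified two-level look-aside read path: L1 (local) -> L2 (remote) -> database.
public class TwoLevelReadPath {
    private final Map<String, Object> l1 = new HashMap<>(); // stand-in for Caffeine
    private final Map<String, Object> l2 = new HashMap<>(); // stand-in for Redis

    public Object get(String key, Function<String, Object> dbLoader) {
        Object value = l1.get(key);
        if (value != null) {
            return value;                    // L1 hit: fastest path, no network
        }
        value = l2.get(key);
        if (value == null) {
            value = dbLoader.apply(key);     // miss on both levels: hit the database
            if (value != null) {
                l2.put(key, value);          // backfill the remote cache
            }
        }
        if (value != null) {
            l1.put(key, value);              // backfill the local cache
        }
        return value;
    }
}
```

On a repeated read of the same key, the value is served from L1 without touching the remote cache or the database.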
V1.0 – Manual implementation – Define a Caffeine bean, configure dependencies, and write explicit cache logic in the service layer. Example configuration:
@Configuration
public class CaffeineConfig {

    @Bean
    public Cache<String, Object> caffeineCache() {
        return Caffeine.newBuilder()
                .initialCapacity(128)
                .maximumSize(1024)
                .expireAfterWrite(60, TimeUnit.SECONDS)
                .build();
    }
}

POM dependencies:
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
<version>2.9.2</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>

application.yml Redis connection:
spring:
  redis:
    host: 127.0.0.1
    port: 6379
    database: 0
    timeout: 10000ms
    lettuce:
      pool:
        max-active: 8
        max-wait: -1ms
        max-idle: 8
        min-idle: 0

Service method with manual two‑level cache handling:
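The manual approach also assumes a configured `RedisTemplate` bean; a common setup with string keys and JSON-serialized values (a sketch, not shown in the source) looks like:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // String keys for readable cache entries, JSON values so cached
        // objects survive the round-trip without Java serialization
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
```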
public Order getOrderById(Long id) {
    String key = CacheConstant.ORDER + id;
    // Check Caffeine first; on a miss, fall back to Redis, then the database
    Order order = (Order) cache.get(key, k -> {
        Object obj = redisTemplate.opsForValue().get(k);
        if (obj != null) {
            log.info("get data from redis");
            return obj;
        }
        log.info("get data from database");
        Order dbOrder = orderMapper.selectOne(
                new LambdaQueryWrapper<Order>().eq(Order::getId, id));
        if (dbOrder != null) {
            redisTemplate.opsForValue().set(k, dbOrder, 120, TimeUnit.SECONDS);
        }
        return dbOrder;
    });
    return order;
}

V2.0 – Spring Cache annotations – Enable caching and use @Cacheable, @CachePut, and @CacheEvict so that Spring manages the cache, removing most of the manual plumbing. Example:
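For these annotations to take effect, caching must first be switched on, typically on the application class (a minimal sketch):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@EnableCaching          // activates @Cacheable/@CachePut/@CacheEvict processing
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```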
@Cacheable(value = "order", key = "#id")
public Order getOrderById(Long id) { /* business logic */ }
@CachePut(cacheNames = "order", key = "#order.id")
public Order updateOrder(Order order) { /* business logic */ }
@CacheEvict(cacheNames = "order", key = "#id")
public void deleteOrder(Long id) { /* business logic */ }

Cache manager configuration for Caffeine:
@Configuration
public class CacheManagerConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();
        manager.setCaffeine(Caffeine.newBuilder()
                .initialCapacity(128)
                .maximumSize(1024)
                .expireAfterWrite(60, TimeUnit.SECONDS));
        return manager;
    }
}

V3.0 – Custom annotation with AOP – Define @DoubleCache to describe the cache name, key (Spring EL), timeout, and operation type, then implement an aspect that resolves the key, checks Caffeine first, then Redis, and finally the database.
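The operation type is carried by a small enum; the source does not show its definition, but a minimal version could be:

```java
// Operation types for the @DoubleCache annotation (assumed definition,
// matching the three branches handled by the aspect)
public enum CacheType {
    FULL,   // read-through: check L1, then L2, then the database
    PUT,    // write-through: run the method, then refresh both levels
    DELETE  // evict: remove from both levels, then run the method
}
```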
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface DoubleCache {
    String cacheName();
    String key();                       // Spring EL expression
    long l2TimeOut() default 120;       // Redis TTL in seconds
    CacheType type() default CacheType.FULL;
}

Aspect handling the annotation:
@Slf4j
@Component
@Aspect
@AllArgsConstructor
public class CacheAspect {

    private final Cache<String, Object> cache;
    private final RedisTemplate<String, Object> redisTemplate;

    @Pointcut("@annotation(com.cn.dc.annotation.DoubleCache)")
    public void cacheAspect() {}

    @Around("cacheAspect()")
    public Object doAround(ProceedingJoinPoint point) throws Throwable {
        MethodSignature sig = (MethodSignature) point.getSignature();
        Method method = sig.getMethod();
        // Bind parameter names to argument values for key resolution
        String[] paramNames = sig.getParameterNames();
        Object[] args = point.getArgs();
        TreeMap<String, Object> map = new TreeMap<>();
        for (int i = 0; i < paramNames.length; i++) {
            map.put(paramNames[i], args[i]);
        }
        DoubleCache ann = method.getAnnotation(DoubleCache.class);
        String realKey = ann.cacheName() + ":" + ElParser.parse(ann.key(), map);
        // PUT: execute the method, then refresh both cache levels
        if (ann.type() == CacheType.PUT) {
            Object obj = point.proceed();
            redisTemplate.opsForValue().set(realKey, obj, ann.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, obj);
            return obj;
        }
        // DELETE: evict both levels, then execute the method
        if (ann.type() == CacheType.DELETE) {
            redisTemplate.delete(realKey);
            cache.invalidate(realKey);
            return point.proceed();
        }
        // FULL: Caffeine first, then Redis, finally the database
        Object local = cache.getIfPresent(realKey);
        if (local != null) {
            log.info("get data from caffeine");
            return local;
        }
        Object remote = redisTemplate.opsForValue().get(realKey);
        if (remote != null) {
            log.info("get data from redis");
            cache.put(realKey, remote);
            return remote;
        }
        log.info("get data from database");
        Object result = point.proceed();
        if (result != null) {
            redisTemplate.opsForValue().set(realKey, result, ann.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, result);
        }
        return result;
    }
}

Applying the custom annotation keeps the service code concise:
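The aspect delegates key resolution to an `ElParser` helper that the source does not show. In production this would typically wrap Spring's `SpelExpressionParser`; the hand-rolled stand-in below (an assumption, handling only `#param` and `#param.property` expressions) illustrates the idea:

```java
import java.lang.reflect.Method;
import java.util.Map;

// Naive stand-in for a SpEL key parser: supports "#param" and "#param.property".
// A real implementation would use org.springframework.expression.spel.standard.SpelExpressionParser.
public class ElParser {
    public static Object parse(String expression, Map<String, Object> params) {
        if (!expression.startsWith("#")) {
            return expression;                     // literal key, use as-is
        }
        String[] parts = expression.substring(1).split("\\.");
        Object value = params.get(parts[0]);       // resolve the parameter name
        for (int i = 1; i < parts.length && value != null; i++) {
            value = readProperty(value, parts[i]); // walk nested properties via getters
        }
        return value;
    }

    private static Object readProperty(Object target, String property) {
        String getter = "get" + Character.toUpperCase(property.charAt(0)) + property.substring(1);
        try {
            Method m = target.getClass().getMethod(getter);
            return m.invoke(target);
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot resolve property: " + property, e);
        }
    }
}
```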
@DoubleCache(cacheName = "order", key = "#id", type = CacheType.FULL)
public Order getOrderById(Long id) {
    return orderMapper.selectOne(new LambdaQueryWrapper<Order>().eq(Order::getId, id));
}

@DoubleCache(cacheName = "order", key = "#order.id", type = CacheType.PUT)
public Order updateOrder(Order order) {
    orderMapper.updateById(order);
    return order;
}

@DoubleCache(cacheName = "order", key = "#id", type = CacheType.DELETE)
public void deleteOrder(Long id) {
    orderMapper.deleteById(id);
}

The article concludes that two‑level caching can improve latency and reduce database load, but it introduces complexity around consistency, expiration policies, and concurrency; developers should therefore evaluate whether the added local cache is truly needed for their specific workload.
Code Ape Tech Column
Former Ant Group P8 engineer and pure technologist, sharing full‑stack Java, interview, and career advice through a column. Site: java-family.cn