Mastering Two-Level Cache in Spring Boot: Caffeine + Redis Integration
This article shows how to build a two‑level cache in a Spring Boot project, with Caffeine as the local (first‑level) cache and Redis as the remote (second‑level) cache. It covers a manual implementation, Spring's cache annotations, and a custom AOP‑based solution, along with the advantages, consistency challenges, and best‑practice code for each.
In high‑performance service architectures, caching is essential: hot data is kept in a remote cache such as Redis or Memcached, and the database is queried only on a cache miss.
When remote cache alone is insufficient, a local cache (e.g., Caffeine or Guava) can be added as a first‑level cache, forming a two‑level cache architecture.
Advantages and Issues
A local cache lives in the application's own heap, so reads are extremely fast and suit data that changes infrequently.
Serving those reads locally also avoids a network round trip to Redis, lowering latency and load on the remote cache.
Data consistency must be maintained between the two cache levels and the database. In a distributed deployment there is a further wrinkle: when one node updates or deletes an entry, the first‑level caches on every other node must be invalidated too, which is commonly solved by broadcasting invalidation messages over Redis pub/sub.
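The broadcast step can be illustrated without a Redis server. In this pure‑JDK sketch (all names are illustrative), a tiny in‑process bus stands in for a Redis pub/sub channel: each node subscribes its local cache, and publishing a key evicts it on every node. In a real deployment the publish side would be `redisTemplate.convertAndSend(channel, key)` and the subscribe side a listener registered on a `RedisMessageListenerContainer`.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy stand-in for a Redis pub/sub channel: each "node" subscribes and
// evicts the published key from its own node-local (first-level) cache.
public class InvalidationBusDemo {
    // one key-evicting subscriber per node
    private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

    // registers a new node's local cache on the bus and returns it
    public Map<String, Object> newNode() {
        Map<String, Object> localCache = new ConcurrentHashMap<>();
        subscribers.add(localCache::remove);   // on message: evict the key locally
        return localCache;
    }

    // in a real setup: redisTemplate.convertAndSend(channel, key)
    public void publishInvalidation(String key) {
        subscribers.forEach(s -> s.accept(key));
    }
}
```

The point of the pattern: the node that performs the write publishes once, and every other node drops only the affected key, rather than flushing its whole local cache.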
Below is a simple implementation of two‑level cache in a Spring Boot project using Caffeine as the first‑level cache and Redis as the second‑level cache.
Preparation
Add the following Maven dependencies:
<code><dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.8.1</version>
</dependency></code>
Configure the Redis connection in application.yml:
<code>spring:
  redis:
    host: 127.0.0.1
    port: 6379
    database: 0
    timeout: 10000ms
    lettuce:
      pool:
        max-active: 8
        max-wait: -1ms
        max-idle: 8
        min-idle: 0</code>
Use RedisTemplate for Redis reads and writes; configure the ConnectionFactory and serialization as needed.
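A typical serialization setup is shown below as a configuration sketch (adjust the value type and serializers to your own model): String keys keep entries readable in redis-cli, and JSON values avoid the class-version coupling of JDK serialization.

```java
@Configuration
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // readable keys: plain strings rather than JDK-serialized bytes
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        // JSON values: portable across application restarts and class changes
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
```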
V1.0 – Manual Two‑Level Cache
Configure Caffeine:
<code>@Configuration
public class CaffeineConfig {
    @Bean
    public Cache<String, Object> caffeineCache() {
        return Caffeine.newBuilder()
                .initialCapacity(128)
                .maximumSize(1024)
                .expireAfterWrite(60, TimeUnit.SECONDS)
                .build();
    }
}</code>
Here initialCapacity presizes the internal hash table, maximumSize caps the number of entries (once the cap is reached, Caffeine evicts entries according to its Window TinyLFU policy), and expireAfterWrite removes an entry a fixed interval after it was created or last overwritten, regardless of how often it is read.
Implement service methods that first check the Caffeine cache, then Redis, and finally the database, updating caches accordingly. Example for fetching an order:
<code>public Order getOrderById(Long id) {
    String key = CacheConstant.ORDER + id;
    Order order = (Order) cache.get(key, k -> {
        Object obj = redisTemplate.opsForValue().get(k);
        if (Objects.nonNull(obj)) {
            log.info("get data from redis");
            return obj;
        }
        log.info("get data from database");
        Order myOrder = orderMapper.selectOne(new LambdaQueryWrapper<Order>().eq(Order::getId, id));
        redisTemplate.opsForValue().set(k, myOrder, 120, TimeUnit.SECONDS);
        return myOrder;
    });
    return order;
}</code>
Update and delete methods manipulate both caches in the same way. The first call populates Redis (120 s TTL) and Caffeine (60 s TTL); calls within the next 60 s hit Caffeine; calls after 60 s but before 120 s hit Redis and repopulate Caffeine; once both have expired, the database is queried again.
Test runs confirm this behavior: the logs show the data source shifting from the database to Caffeine to Redis as the two timeouts elapse.
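Stripped of the Caffeine and Redis APIs, this read path is just an ordered fallback with back‑filling. The following pure‑JDK sketch captures the same flow (plain maps stand in for both cache levels and the database; TTLs are omitted, and all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Ordered fallback: L1 (local) -> L2 (remote) -> database, back-filling on the way.
public class TwoLevelReadThrough {
    public final Map<String, Object> l1 = new ConcurrentHashMap<>(); // stands in for Caffeine
    public final Map<String, Object> l2 = new ConcurrentHashMap<>(); // stands in for Redis
    public final Map<String, Object> db = new HashMap<>();           // stands in for the database

    public Object get(String key) {
        // computeIfAbsent plays the role of Caffeine's cache.get(key, mappingFunction):
        // a non-null result is stored in L1 automatically, null stores nothing.
        return l1.computeIfAbsent(key, k -> {
            Object fromL2 = l2.get(k);
            if (fromL2 != null) {
                return fromL2;                 // L2 hit; back-filled into L1 on return
            }
            Object fromDb = db.get(k);         // miss on both levels: query the database
            if (fromDb != null) {
                l2.put(k, fromDb);             // back-fill L2 (with a TTL in the real code)
            }
            return fromDb;
        });
    }
}
```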
V2.0 – Spring Cache Annotations
Leverage Spring's CacheManager and the @Cacheable, @CachePut, and @CacheEvict annotations to reduce manual cache handling. Configure a CaffeineCacheManager bean:
<code>@Configuration
public class CacheManagerConfig {
    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCaffeine(Caffeine.newBuilder()
                .initialCapacity(128)
                .maximumSize(1024)
                .expireAfterWrite(60, TimeUnit.SECONDS));
        return cacheManager;
    }
}</code>
Enable caching with @EnableCaching on the application class, then annotate service methods, e.g.:
<code>@Cacheable(value = "order", key = "#id")
public Order getOrderById(Long id) { ... }</code>
The value and key attributes determine the cache name and the entry key. Note that with the CaffeineCacheManager above, the annotations manage only the Caffeine level; the Redis reads and writes still live inside the method body, so this version mainly removes the local‑cache plumbing.
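The write paths can be annotated the same way. A sketch reusing the article's orderMapper, redisTemplate, and CacheConstant names (the Redis calls remain manual because the CaffeineCacheManager only maintains the local level):

```java
// @CachePut always runs the method, then stores the return value in the Caffeine cache
@CachePut(value = "order", key = "#order.id")
public Order updateOrder(Order order) {
    orderMapper.updateById(order);
    // keep the second level in sync by hand, 120 s TTL as in V1.0
    redisTemplate.opsForValue().set(CacheConstant.ORDER + order.getId(), order, 120, TimeUnit.SECONDS);
    return order;
}

// @CacheEvict removes the Caffeine entry; the Redis entry is deleted manually
@CacheEvict(value = "order", key = "#id")
public void deleteOrder(Long id) {
    orderMapper.deleteById(id);
    redisTemplate.delete(CacheConstant.ORDER + id);
}
```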
V3.0 – Custom Annotation with AOP
Define a custom @DoubleCache annotation that takes a cache name, a Spring EL key expression, a second‑level (Redis) timeout, and an operation type (FULL, PUT, DELETE):
<code>@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface DoubleCache {
    String cacheName();
    String key();                              // supports Spring EL
    long l2TimeOut() default 120;              // Redis TTL in seconds
    CacheType type() default CacheType.FULL;
}</code>
A utility method parses the EL expression against the method's parameters:
<code>public static String parse(String elString, TreeMap<String, Object> map) {
    elString = String.format("#{%s}", elString);
    ExpressionParser parser = new SpelExpressionParser();
    EvaluationContext context = new StandardEvaluationContext();
    map.forEach((k, v) -> context.setVariable(k, v));
    Expression expression = parser.parseExpression(elString, new TemplateParserContext());
    return expression.getValue(context, String.class);
}</code>
The aspect below intercepts methods annotated with @DoubleCache, builds the real cache key, and performs the appropriate action: read from Caffeine, fall back to Redis, write to both levels, or delete from both.
<code>@Slf4j
@Component
@Aspect
@AllArgsConstructor
public class CacheAspect {
    private final Cache<String, Object> cache;
    private final RedisTemplate<String, Object> redisTemplate;

    @Pointcut("@annotation(com.cn.dc.annotation.DoubleCache)")
    public void cacheAspect() {}

    @Around("cacheAspect()")
    public Object doAround(ProceedingJoinPoint point) throws Throwable {
        MethodSignature signature = (MethodSignature) point.getSignature();
        Method method = signature.getMethod();

        // bind parameter names to argument values for EL evaluation
        String[] paramNames = signature.getParameterNames();
        Object[] args = point.getArgs();
        TreeMap<String, Object> map = new TreeMap<>();
        for (int i = 0; i < paramNames.length; i++) {
            map.put(paramNames[i], args[i]);
        }

        DoubleCache annotation = method.getAnnotation(DoubleCache.class);
        String elResult = ElParser.parse(annotation.key(), map);
        String realKey = annotation.cacheName() + ":" + elResult;

        // PUT: run the method, then refresh both cache levels
        if (annotation.type() == CacheType.PUT) {
            Object obj = point.proceed();
            redisTemplate.opsForValue().set(realKey, obj, annotation.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, obj);
            return obj;
        }
        // DELETE: evict both levels, then run the method
        if (annotation.type() == CacheType.DELETE) {
            redisTemplate.delete(realKey);
            cache.invalidate(realKey);
            return point.proceed();
        }

        // FULL: Caffeine -> Redis -> database, back-filling on the way
        Object caffeineCache = cache.getIfPresent(realKey);
        if (Objects.nonNull(caffeineCache)) {
            log.info("get data from caffeine");
            return caffeineCache;
        }
        Object redisCache = redisTemplate.opsForValue().get(realKey);
        if (Objects.nonNull(redisCache)) {
            log.info("get data from redis");
            cache.put(realKey, redisCache);
            return redisCache;
        }
        log.info("get data from database");
        Object obj = point.proceed();
        if (Objects.nonNull(obj)) {
            redisTemplate.opsForValue().set(realKey, obj, annotation.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, obj);
        }
        return obj;
    }
}</code>
Apply the annotation to service methods, leaving only business logic inside them:
<code>@DoubleCache(cacheName = "order", key = "#id", type = CacheType.FULL)
public Order getOrderById(Long id) {
    return orderMapper.selectOne(new LambdaQueryWrapper<Order>().eq(Order::getId, id));
}

@DoubleCache(cacheName = "order", key = "#order.id", type = CacheType.PUT)
public Order updateOrder(Order order) {
    orderMapper.updateById(order);
    return order;
}

@DoubleCache(cacheName = "order", key = "#id", type = CacheType.DELETE)
public void deleteOrder(Long id) {
    orderMapper.deleteById(id);
}</code>
Summary
Three approaches were presented, moving from fully manual handling to Spring's cache annotations to a custom AOP solution, each further reducing how much cache code intrudes into business logic. Whether to adopt a two‑level cache at all depends on the workload: a remote cache alone is often sufficient, and adding a local level raises further questions about concurrency, behavior under transaction rollback, and which data belongs at each level.
macrozheng
Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.