Two-Level Cache in Spring Boot: Boost Performance with Caffeine & Redis
Learn how to implement a two‑level caching architecture in Spring Boot using Caffeine as a local cache and Redis as a remote cache, covering manual implementations, annotation‑driven approaches with @Cacheable/@CachePut/@CacheEvict, and a custom @DoubleCache annotation to minimize code intrusion while improving response times.
In high‑performance service architectures, caching is essential. Remote caches such as Redis or Memcached store hot data and only query the database on a cache miss, reducing latency and database load.
When a remote cache alone is insufficient, a local cache (e.g., Guava or Caffeine) can be added as a first‑level cache, forming a two‑level cache architecture that further improves response speed.
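The read path of such a two‑level cache can be sketched in plain Java, with maps standing in for Caffeine and Redis and a loader function standing in for the database (all names here — `l1`, `l2`, `loadFromDb` — are invented for this illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Illustrative sketch of the two-level read path. The maps `l1` and `l2`
// stand in for Caffeine and Redis; `loadFromDb` stands in for the database.
class TwoLevelLookup {
    private final ConcurrentMap<String, Object> l1 = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, Object> l2 = new ConcurrentHashMap<>();
    int dbHits = 0; // counts how often we fell through to the "database"

    Object get(String key, Function<String, Object> loadFromDb) {
        Object v = l1.get(key);            // 1. local cache, fastest
        if (v != null) return v;
        v = l2.get(key);                   // 2. remote cache
        if (v != null) {
            l1.put(key, v);                //    promote to the local cache
            return v;
        }
        dbHits++;
        v = loadFromDb.apply(key);         // 3. database, slowest
        if (v != null) {
            l2.put(key, v);                //    populate both levels
            l1.put(key, v);
        }
        return v;
    }
}
```

A second lookup for the same key is served from the first level and never reaches the loader, which is exactly the effect the architecture is after.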
Advantages and Issues
A local cache resides in JVM memory, providing extremely fast access for data that changes infrequently.
Using a local cache reduces network I/O between the application and the remote cache (Redis), decreasing latency.
Data consistency must be handled: updates to the database must also refresh or invalidate both the local and remote caches.
In distributed environments, the local caches on different nodes need synchronization, often via Redis pub/sub.
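That pub/sub synchronization can be sketched with an in-process message bus standing in for a Redis channel (in a real deployment each node would subscribe via Spring Data Redis's `RedisMessageListenerContainer`; the `InvalidationBus` and `CacheNode` names here are invented for the sketch):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Each node keeps its own local cache and subscribes to an invalidation
// "channel". Publishing a key evicts it from every node's local cache.
class InvalidationBus {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    void subscribe(Consumer<String> listener) {
        subscribers.add(listener);
    }

    void publishEvict(String key) {        // stands in for Redis PUBLISH
        subscribers.forEach(s -> s.accept(key));
    }
}

class CacheNode {
    final Map<String, Object> localCache = new HashMap<>();

    CacheNode(InvalidationBus bus) {
        bus.subscribe(localCache::remove); // stands in for the channel listener
    }
}
```

When one node updates an order, it publishes the key; every other node drops its stale local copy and re-reads through Redis or the database on the next access.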
Preparation
Add the required dependencies for Caffeine, Spring Boot Redis, and connection pooling:
<code><dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.8.1</version>
</dependency></code>
Configure the Redis connection in application.yml:
<code>spring:
  redis:
    host: 127.0.0.1
    port: 6379
    database: 0
    timeout: 10000ms
    lettuce:
      pool:
        max-active: 8
        max-wait: -1ms
        max-idle: 8
        min-idle: 0</code>
V1.0 – Manual Two‑Level Cache
Define a Caffeine bean:
<code>@Configuration
public class CaffeineConfig {

    @Bean
    public Cache<String, Object> caffeineCache() {
        return Caffeine.newBuilder()
                .initialCapacity(128)                    // initial table size
                .maximumSize(1024)                       // max entries before eviction
                .expireAfterWrite(60, TimeUnit.SECONDS)  // local TTL
                .build();
    }
}</code>
A typical service method without caching:
<code>@Service
@AllArgsConstructor
public class OrderServiceImpl implements OrderService {

    private final OrderMapper orderMapper;

    @Override
    public Order getOrderById(Long id) {
        return orderMapper.selectOne(new LambdaQueryWrapper<Order>().eq(Order::getId, id));
    }

    @Override
    public void updateOrder(Order order) {
        orderMapper.updateById(order);
    }

    @Override
    public void deleteOrder(Long id) {
        orderMapper.deleteById(id);
    }
}</code>
Wrap the read method with two‑level cache logic:
<code>public Order getOrderById(Long id) {
    String key = CacheConstant.ORDER + id;
    Order order = (Order) cache.get(key, k -> {
        // 1. Try Redis (second level)
        Object obj = redisTemplate.opsForValue().get(k);
        if (obj != null) {
            log.info("get data from redis");
            return obj;
        }
        // 2. Fall back to the database
        log.info("get data from database");
        Order dbOrder = orderMapper.selectOne(new LambdaQueryWrapper<Order>().eq(Order::getId, id));
        // Guard against null: RedisTemplate rejects null values
        if (dbOrder != null) {
            redisTemplate.opsForValue().set(k, dbOrder, 120, TimeUnit.SECONDS);
        }
        return dbOrder;
    });
    return order;
}</code>
Update and delete operations must synchronize both caches manually:
<code>public void updateOrder(Order order) {
    log.info("update order data");
    String key = CacheConstant.ORDER + order.getId();
    orderMapper.updateById(order);
    redisTemplate.opsForValue().set(key, order, 120, TimeUnit.SECONDS);
    cache.put(key, order);
}

public void deleteOrder(Long id) {
    log.info("delete order");
    orderMapper.deleteById(id);
    String key = CacheConstant.ORDER + id;
    redisTemplate.delete(key);
    cache.invalidate(key);
}</code>
V2.0 – Spring Cache Annotations
Configure a CaffeineCacheManager bean:
<code>@Configuration
public class CacheManagerConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager();
        manager.setCaffeine(Caffeine.newBuilder()
                .initialCapacity(128)
                .maximumSize(1024)
                .expireAfterWrite(60, TimeUnit.SECONDS));
        return manager;
    }
}</code>
Enable Spring caching in the main application class:
<code>@SpringBootApplication
@EnableCaching
public class MallApplication { ... }</code>
Apply the annotations to the service methods:
<code>@Cacheable(value = "order", key = "#id")
public Order getOrderById(Long id) {
    // only the DB logic here; the Redis lookup is omitted for brevity
    return orderMapper.selectOne(new LambdaQueryWrapper<Order>().eq(Order::getId, id));
}

@CachePut(cacheNames = "order", key = "#order.id")
public Order updateOrder(Order order) {
    orderMapper.updateById(order);
    redisTemplate.opsForValue().set(CacheConstant.ORDER + order.getId(), order, 120, TimeUnit.SECONDS);
    return order;
}

@CacheEvict(cacheNames = "order", key = "#id")
public void deleteOrder(Long id) {
    orderMapper.deleteById(id);
    redisTemplate.delete(CacheConstant.ORDER + id);
}</code>
With these annotations, Spring manages the local Caffeine cache automatically, while the Redis updates are still performed manually.
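Conceptually, @Cacheable wraps the method in a get-or-load step against the configured CacheManager, which is why the Redis write still has to appear in the method body. A simplified model of that wrapper (the class and field names are invented, and a plain map stands in for the Caffeine cache):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified model of what @Cacheable does around a method call:
// look the key up in the cache, and only invoke the real method on a miss.
class CacheableModel {
    final Map<String, Object> cache = new HashMap<>();
    int methodCalls = 0; // how often the underlying method actually ran

    Object getOrLoad(String key, Supplier<Object> method) {
        Object cached = cache.get(key);
        if (cached != null) return cached;  // cache hit: method body is skipped
        methodCalls++;
        Object result = method.get();       // cache miss: run the real method
        if (result != null) cache.put(key, result);
        return result;
    }
}
```

Because the method body is skipped entirely on a local cache hit, any Redis logic inside it is bypassed too — one motivation for moving both levels into a single aspect, as V3.0 does next.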
V3.0 – Custom @DoubleCache Annotation
Define the annotation:
<code>@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface DoubleCache {
    String cacheName();
    String key();                            // supports Spring EL
    long l2TimeOut() default 120;            // Redis expiration in seconds
    CacheType type() default CacheType.FULL; // FULL, PUT, DELETE
}</code>
An enum for the cache operation type:
<code>public enum CacheType {
    FULL,   // read-write
    PUT,    // write only
    DELETE  // delete only
}</code>
A utility to evaluate Spring EL expressions against the method arguments:
<code>public static String parse(String elString, TreeMap<String, Object> map) {
    elString = String.format("#{%s}", elString);
    ExpressionParser parser = new SpelExpressionParser();
    EvaluationContext context = new StandardEvaluationContext();
    map.forEach(context::setVariable);
    Expression expression = parser.parseExpression(elString, new TemplateParserContext());
    return expression.getValue(context, String.class);
}</code>
The aspect that implements the caching logic:
<code>@Slf4j
@Component
@Aspect
@AllArgsConstructor
public class CacheAspect {

    private final Cache<String, Object> cache;                 // Caffeine, first level
    private final RedisTemplate<String, Object> redisTemplate; // second level

    @Pointcut("@annotation(com.cn.dc.annotation.DoubleCache)")
    public void cacheAspect() {}

    @Around("cacheAspect()")
    public Object doAround(ProceedingJoinPoint point) throws Throwable {
        MethodSignature signature = (MethodSignature) point.getSignature();
        Method method = signature.getMethod();

        // Collect parameter names and values for SpEL evaluation
        String[] paramNames = signature.getParameterNames();
        Object[] args = point.getArgs();
        TreeMap<String, Object> map = new TreeMap<>();
        for (int i = 0; i < paramNames.length; i++) {
            map.put(paramNames[i], args[i]);
        }

        DoubleCache ann = method.getAnnotation(DoubleCache.class);
        String realKey = ann.cacheName() + ":" + ElParser.parse(ann.key(), map);

        // PUT: execute the method, then write the result to both levels
        if (ann.type() == CacheType.PUT) {
            Object result = point.proceed();
            redisTemplate.opsForValue().set(realKey, result, ann.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, result);
            return result;
        }

        // DELETE: evict both levels, then execute the method
        if (ann.type() == CacheType.DELETE) {
            redisTemplate.delete(realKey);
            cache.invalidate(realKey);
            return point.proceed();
        }

        // FULL (read-write): Caffeine -> Redis -> database
        Object local = cache.getIfPresent(realKey);
        if (local != null) {
            log.info("get data from caffeine");
            return local;
        }
        Object remote = redisTemplate.opsForValue().get(realKey);
        if (remote != null) {
            log.info("get data from redis");
            cache.put(realKey, remote); // promote to the first level
            return remote;
        }
        log.info("get data from database");
        Object result = point.proceed();
        if (result != null) {
            redisTemplate.opsForValue().set(realKey, result, ann.l2TimeOut(), TimeUnit.SECONDS);
            cache.put(realKey, result);
        }
        return result;
    }
}</code>
Apply the custom annotation to the service methods, eliminating the manual cache code:
<code>@DoubleCache(cacheName = "order", key = "#id", type = CacheType.FULL)
public Order getOrderById(Long id) {
    return orderMapper.selectOne(new LambdaQueryWrapper<Order>().eq(Order::getId, id));
}

@DoubleCache(cacheName = "order", key = "#order.id", type = CacheType.PUT)
public Order updateOrder(Order order) {
    orderMapper.updateById(order);
    return order;
}

@DoubleCache(cacheName = "order", key = "#id", type = CacheType.DELETE)
public void deleteOrder(Long id) {
    orderMapper.deleteById(id);
}</code>
Summary
The article demonstrates three progressively less intrusive ways to manage a two‑level cache in a Spring Boot application: a fully manual approach, Spring's built‑in cache annotations, and a custom @DoubleCache annotation powered by an AOP aspect. Choosing the right method depends on project complexity, consistency requirements, and the desired balance between control and code cleanliness.
macrozheng
Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.