Beyond Simple Caching: 8 Essential Redis Use Cases for Java Backend Engineers
This guide walks Java backend developers through eight core Redis scenarios (caching, distributed locks, rate limiting, session sharing, leaderboards and counters, message and delay queues, bitmap statistics, and geolocation), providing complete code, diagrams, and production-grade best practices.
In modern Java backend projects, Redis is more than a high‑performance in‑memory key‑value store; it acts as a versatile "Swiss‑army knife" that can handle caching, distributed locking, rate limiting, messaging, leaderboards, session sharing, counters, bitmap statistics, and geolocation.
Redis’s Position in a Spring Boot Microservice Architecture
Redis sits between the application layer and the database, accelerating data access and decoupling services. Its core strengths are speed (single‑threaded I/O multiplexing achieving >100k QPS), rich data structures (String, Hash, List, Set, ZSet, Stream, BitMap, HyperLogLog, GEO), and production‑grade durability (RDB/AOF, replication, Sentinel, Cluster).
Environment Setup and Dependencies
Add the following Maven dependencies to enable Spring Data Redis, a connection pool, and Redisson for advanced features:
<dependencies>
<!-- Spring Data Redis (Lettuce) -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- Connection pool -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-pool2</artifactId>
</dependency>
<!-- Redisson for distributed locks, etc. -->
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson-spring-boot-starter</artifactId>
<version>3.27.2</version>
</dependency>
<!-- JSON serialization -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
</dependencies>
Configure Redis in application.yml with host, port, password, database, timeout, and Lettuce pool settings.
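A minimal configuration might look like the following (a sketch; on Spring Boot 3.x the prefix is spring.data.redis, while older versions use spring.redis, and the pool values here are illustrative defaults):

```yaml
spring:
  data:
    redis:
      host: localhost
      port: 6379
      password: ""        # set a real password in production
      database: 0
      timeout: 3s
      lettuce:
        pool:
          max-active: 16
          max-idle: 8
          min-idle: 2
          max-wait: 2s
```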
Scenario 1 – Cache
Use the classic Cache‑Aside pattern. The article provides a flow diagram and two implementations:
Manual cache: a ProductService checks the cache, falls back to the database, writes back with a TTL, and evicts on update.
public Product getById(Long id) {
String key = CACHE_KEY + id;
Product cached = (Product) redisTemplate.opsForValue().get(key);
if (cached != null) return cached;
Product product = productMapper.selectById(id);
if (product != null) {
redisTemplate.opsForValue().set(key, product, CACHE_TTL, TimeUnit.MINUTES);
}
return product;
}
Spring Cache annotations: configure a CacheManager with JSON serialization and a 10-minute TTL, then use @Cacheable, @CachePut, and @CacheEvict on service methods.
@Cacheable(value = "product", key = "#id", unless = "#result == null")
public Product getById(Long id) { return productMapper.selectById(id); }
To mitigate cache-related problems, the guide details solutions for cache penetration (cache empty values + Bloom filter), cache breakdown (mutex lock + logical expiration), and cache avalanche (randomized TTL + multi-level cache).
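The cache-empty-values defense against penetration can be sketched in plain Java. This is an illustrative, self-contained model: a ConcurrentHashMap stands in for Redis, and the class and field names are invented for the sketch. With real Redis you would SET a sentinel value (e.g. an empty string) under the same key with a short TTL.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// In-memory stand-in for Redis so the null-caching logic is self-contained.
class NullCachingLoader<K, V> {
    private static class Entry<V> {
        final Optional<V> value;
        final long expiresAt;
        Entry(Optional<V> value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final Function<K, V> db;   // the slow source of truth
    private final long hitTtlMs;       // TTL for real values
    private final long missTtlMs;      // much shorter TTL for cached misses
    int dbCalls = 0;                   // exposed so the demo below can count DB hits

    NullCachingLoader(Function<K, V> db, long hitTtlMs, long missTtlMs) {
        this.db = db; this.hitTtlMs = hitTtlMs; this.missTtlMs = missTtlMs;
    }

    Optional<V> get(K key) {
        long now = System.currentTimeMillis();
        Entry<V> e = cache.get(key);
        if (e != null && e.expiresAt > now) return e.value;  // hit, including cached "not found"
        dbCalls++;
        V v = db.apply(key);                                 // miss: go to the database
        long ttl = (v == null) ? missTtlMs : hitTtlMs;
        cache.put(key, new Entry<>(Optional.ofNullable(v), now + ttl));
        return Optional.ofNullable(v);
    }
}
```

Repeated lookups of a non-existent id now hit the cached empty entry instead of the database, which is the essence of the penetration defense; a Bloom filter in front rejects most bogus ids before they even reach the cache.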
Scenario 2 – Distributed Lock
In a clustered environment, JVM-level locks (synchronized, ReentrantLock) cannot guarantee cross-process mutual exclusion, so Redis is used.
SETNX-style lock: a simple lock implemented with SET key value NX EX seconds. The lock value is a UUID so that only the owner can release it.
public String tryLock(String key, long expireSeconds) {
String value = UUID.randomUUID().toString();
Boolean ok = stringRedisTemplate.opsForValue()
.setIfAbsent(key, value, expireSeconds, TimeUnit.SECONDS);
return Boolean.TRUE.equals(ok) ? value : null;
}
Redisson lock: production-ready with automatic lease renewal, re-entrancy, fairness, and RedLock support.
RLock lock = redissonClient.getLock("lock:stock:" + productId);
if (!lock.tryLock(3, 30, TimeUnit.SECONDS)) {
throw new RuntimeException("Flash sale is busy, please try again later");
}
// critical section
Scenario 3 – Distributed Rate Limiting
For flash‑sale, open‑API, or anti‑scraping scenarios, Redis + Lua provides an efficient sliding‑window limiter. rate_limit.lua implements the algorithm:
-- KEYS[1]: limit key
-- ARGV[1]: window (ms)
-- ARGV[2]: max count
-- ARGV[3]: current timestamp (ms)
local key = KEYS[1]
local window = tonumber(ARGV[1])
local max = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
redis.call('ZREMRANGEBYSCORE', key, 0, now - window)
local cur = redis.call('ZCARD', key)
if cur < max then
redis.call('ZADD', key, now, now)
redis.call('PEXPIRE', key, window)
return 1
else
return 0
end
A Java wrapper RateLimiter loads the script and invokes it via StringRedisTemplate.execute. An annotation @RateLimit together with an AOP aspect intercepts controller methods, applying the limiter and throwing an exception on overload.
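The same sliding-window bookkeeping can be traced in plain Java. This is an in-process illustration only (a TreeMap stands in for the ZSet, and the class name is invented); the real limiter runs the Lua script atomically on the Redis server so that all application instances share one window.

```java
import java.util.TreeMap;

// In-process sliding-window limiter mirroring the Lua script's ZSet logic:
// drop entries older than the window, count what remains, admit if under max.
class SlidingWindowLimiter {
    private final long windowMs;
    private final int max;
    // timestamp -> number of requests at that millisecond (ZSet stand-in)
    private final TreeMap<Long, Integer> hits = new TreeMap<>();

    SlidingWindowLimiter(long windowMs, int max) { this.windowMs = windowMs; this.max = max; }

    synchronized boolean tryAcquire(long nowMs) {
        hits.headMap(nowMs - windowMs, true).clear();   // ZREMRANGEBYSCORE key 0 (now - window)
        int current = hits.values().stream().mapToInt(Integer::intValue).sum(); // ZCARD
        if (current >= max) return false;
        hits.merge(nowMs, 1, Integer::sum);             // ZADD key now now
        return true;
    }
}
```

One caveat the Lua version inherits from using the timestamp as the ZSet member: two requests in the same millisecond overwrite each other, so high-precision limiters often append a random suffix to the member.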
Scenario 4 – Distributed Session Sharing
Replace Tomcat's single-node session with Spring Session + Redis. Add the dependency spring-session-data-redis and configure:
spring:
session:
store-type: redis
timeout: 30m
redis:
namespace: spring:session
Enable with @EnableRedisHttpSession(maxInactiveIntervalInSeconds = 1800). All HttpSession data is automatically persisted in Redis, allowing multiple instances to share authentication state.
Scenario 5 – Leaderboard & Counter
Use ZSet (sorted set) for real-time ranking. The service increments scores with incrementScore and retrieves the top N with reverseRangeWithScores. Individual rank is obtained via reverseRank.
public void addScore(String userId, double score) {
redisTemplate.opsForZSet().incrementScore(RANK_KEY, userId, score);
}
public List<RankItem> top(int n) { ... }
public Long myRank(String userId) { ... }
For simple counters (article views, likes), INCR or a daily Hash is used:
public long incrView(Long articleId) {
String key = "article:view:" + articleId;
return redisTemplate.opsForValue().increment(key);
}
public long incrViewWithExpire(Long articleId) {
String key = "article:view:daily:" + LocalDate.now();
Long count = redisTemplate.opsForHash().increment(key, articleId.toString(), 1);
redisTemplate.expire(key, 2, TimeUnit.DAYS);
return count;
}
Scenario 6 – Message Queue & Delay Queue
Redis 5.0+ introduces Stream, a lightweight MQ supporting consumer groups, ACK, and persistence.
The producer adds a map to stream:order via opsForStream().add. The consumer creates a consumer group, polls with StreamMessageListenerContainer, processes messages, and acknowledges them.
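The consumer-group/ACK lifecycle can be modeled in a few lines of plain Java. This is a toy in-memory stand-in (all names invented), not the Spring Data Stream API: the point it illustrates is that a delivered message stays in a pending list until acknowledged, so a crashed consumer's messages are not silently lost.

```java
import java.util.*;

// Toy in-memory model of Redis Stream consumer-group semantics
// (XADD / XREADGROUP / XACK). Illustrative only.
class TinyStream {
    record Msg(long id, String payload) {}

    private final List<Msg> log = new ArrayList<>();
    private long nextId = 1;
    // per group: index of the next undelivered entry, plus delivered-but-unACKed ids
    private final Map<String, Integer> groupCursor = new HashMap<>();
    private final Map<String, Set<Long>> pending = new HashMap<>();

    long add(String payload) {                 // XADD
        long id = nextId++;
        log.add(new Msg(id, payload));
        return id;
    }

    void createGroup(String group) {           // XGROUP CREATE
        groupCursor.put(group, 0);
        pending.put(group, new HashSet<>());
    }

    Optional<Msg> read(String group) {         // XREADGROUP, one message at a time
        int cur = groupCursor.get(group);
        if (cur >= log.size()) return Optional.empty();
        Msg m = log.get(cur);
        groupCursor.put(group, cur + 1);
        pending.get(group).add(m.id());        // stays pending until ACKed
        return Optional.of(m);
    }

    boolean ack(String group, long id) {       // XACK
        return pending.get(group).remove(id);
    }

    int pendingCount(String group) { return pending.get(group).size(); }
}
```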
For delayed tasks, a ZSet stores order IDs with a future timestamp as the score. A scheduled job scans due items (score ≤ now) and processes them.
public void push(String orderId, long delayMs) {
long trigger = System.currentTimeMillis() + delayMs;
redisTemplate.opsForZSet().add(QUEUE_KEY, orderId, trigger);
}
@Scheduled(fixedDelay = 1000)
public void scan() {
long now = System.currentTimeMillis();
Set<String> due = redisTemplate.opsForZSet()
.rangeByScore(QUEUE_KEY, 0, now, 0, 50);
for (String id : due) {
if (redisTemplate.opsForZSet().remove(QUEUE_KEY, id) > 0) {
log.info("Closing timed-out order: {}", id);
}
}
}
Scenario 7 – Bitmap Statistics
BitMap enables ultra-compact storage of boolean states (e.g., daily sign-in). Setting a bit marks a user's activity; BITCOUNT aggregates the count. At one bit per user, 10 million users cost about 1.25 MB per day, so a full month of sign-in bitmaps stays under roughly 40 MB.
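The sign-in bookkeeping can be sketched with java.util.BitSet standing in for the Redis commands (an illustrative model; with Redis the key might be something like sign:2024-06-01 and the bit offset the user id):

```java
import java.util.BitSet;

// BitSet stand-in for one day's Redis sign-in bitmap.
class DailySignIn {
    private final BitSet bits = new BitSet();

    void signIn(int userId)      { bits.set(userId); }          // SETBIT key userId 1
    boolean signedIn(int userId) { return bits.get(userId); }   // GETBIT key userId
    int total()                  { return bits.cardinality(); } // BITCOUNT key
}
```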
Scenario 8 – GEO Location
Redis’s GEO type supports “nearby people” or “nearby shops”. Add locations with GEOADD and query with GEOSEARCH (or radius via Spring Data). The service returns a list of NearbyShop objects with distance.
public void addShop(String shopId, double lng, double lat) {
redisTemplate.opsForGeo().add(KEY, new Point(lng, lat), shopId);
}
public List<NearbyShop> nearby(double lng, double lat, double radiusKm, int limit) {
Circle circle = new Circle(new Point(lng, lat), new Distance(radiusKm, Metrics.KILOMETERS));
GeoRadiusCommandArgs args = GeoRadiusCommandArgs.newGeoRadiusArgs()
.includeCoordinates().includeDistance().sortAscending().limit(limit);
GeoResults<GeoLocation<String>> results = redisTemplate.opsForGeo().radius(KEY, circle, args);
// map results to NearbyShop list
}
Production-Grade Best Practices
Key naming: use business:entity:id (e.g., user:profile:1001) for clarity and monitoring.
TTL: set expiration on all non-configuration keys to prevent unbounded memory growth.
Avoid large keys: keep a single String under 10 KB; limit Hash/List/ZSet elements to under 5,000 and use HSCAN/SCAN for batch processing.
Do not use KEYS/FLUSHALL in production: replace with SCAN for safe iteration.
Pipeline writes: batch operations with redisTemplate.executePipelined to reduce RTT.
Cache-DB consistency: update the database first, then delete the cache; for strong consistency, use binlog-based async compensation (e.g., Canal).
Monitoring: watch used_memory, connected_clients, instantaneous_ops_per_sec, and slow-query logs; integrate with Prometheus + Grafana.
High availability: use Redis Sentinel or Cluster in production; single-node is for development only.
By mastering these eight scenarios and the accompanying best practices, Java developers can confidently address the majority of high‑concurrency challenges with Redis as the first performance‑optimization layer.
java1234
Former senior programmer at a Fortune Global 500 company, dedicated to sharing Java expertise. Visit Feng's site: Java Knowledge Sharing, www.java1234.com