Implementing and Optimizing a High‑Concurrency Flash Sale System with Optimistic Lock, Distributed Rate Limiting, Redis Cache, and Kafka
This article walks through building a Java‑based flash‑sale (秒杀) service, diagnosing overselling issues, and progressively enhancing it with optimistic locking, distributed rate limiting, Redis caching, and asynchronous Kafka processing to achieve higher throughput and data consistency under heavy concurrency.
The author starts from a simple SSM (Spring + Spring MVC + MyBatis) flash-sale prototype in which a web controller forwards requests to a Dubbo-based service that checks stock, decrements inventory, and creates an order. Initial tests with JMeter reveal severe overselling: far more orders are created than there is available stock.
To fix the problem, an optimistic-lock update is introduced. A createOptimisticOrder(int sid) service method first checks stock, then calls saleStockOptimistic(stock), which executes the SQL UPDATE stock SET sale = sale + 1, version = version + 1 WHERE id = #{id} AND version = #{version}. If the update count is zero (another request changed the row first), a runtime exception is thrown, causing the request to fail fast.
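The article's full method body isn't reproduced in this digest; the compare-and-set semantics of the versioned UPDATE can be sketched in plain Java, with an AtomicReference standing in for the database row (class and field names here are illustrative, not the article's):

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLockDemo {
    // Immutable snapshot standing in for one row of the stock table.
    static final class StockRow {
        final int sale; final int version; final int count;
        StockRow(int sale, int version, int count) {
            this.sale = sale; this.version = version; this.count = count;
        }
    }

    private final AtomicReference<StockRow> row;

    OptimisticLockDemo(int count) {
        this.row = new AtomicReference<>(new StockRow(0, 0, count));
    }

    // Mimics: UPDATE stock SET sale = sale + 1, version = version + 1
    //         WHERE id = #{id} AND version = #{version}
    // Returns the affected-row count: 1 on success, 0 on a version conflict.
    int saleStockOptimistic(StockRow snapshot) {
        if (snapshot.sale >= snapshot.count) return 0; // sold out
        StockRow updated = new StockRow(snapshot.sale + 1, snapshot.version + 1, snapshot.count);
        return row.compareAndSet(snapshot, updated) ? 1 : 0;
    }

    int createOptimisticOrder() {
        StockRow snapshot = row.get();              // SELECT the current row
        if (snapshot.sale >= snapshot.count) {
            throw new RuntimeException("sold out"); // fail fast when no stock remains
        }
        if (saleStockOptimistic(snapshot) == 0) {
            throw new RuntimeException("concurrent update, please retry");
        }
        return snapshot.count - snapshot.sale - 1;  // remaining stock after this sale
    }
}
```

The key property is that a stale version (here, a stale snapshot) makes the update a no-op, so two racing requests can never both decrement the same unit of stock.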
Performance improves, but the service still hits the database heavily. The next step adds a distributed rate limiter built on Redis. A RedisLimit bean is wired up in a @Configuration class (RedisLimitConfig), and the controller method is annotated with @SpringControllerLimit(errorCode = 200) to reject excess traffic early.
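The implementation behind @SpringControllerLimit isn't shown in the digest; a common way to build such a limiter is the fixed-window INCR + EXPIRE pattern. The sketch below models that pattern with an in-memory map standing in for Redis and an injectable clock for testability (all names are illustrative; in production the check must run atomically on Redis, typically via a small Lua script):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

public class FixedWindowLimiter {
    private final int limit;           // max requests allowed per window
    private final long windowMillis;   // window length
    private final LongSupplier clock;  // injectable time source for testing
    // key -> count; in production this is a Redis INCR on "limit:<key>:<window>"
    private final Map<String, Integer> counters = new HashMap<>();

    public FixedWindowLimiter(int limit, long windowMillis, LongSupplier clock) {
        this.limit = limit; this.windowMillis = windowMillis; this.clock = clock;
    }

    // Equivalent Redis commands, run atomically via a Lua script:
    //   local c = redis.call('INCR', KEYS[1])
    //   if c == 1 then redis.call('PEXPIRE', KEYS[1], ARGV[1]) end
    //   return c <= tonumber(ARGV[2])
    public synchronized boolean tryAcquire(String key) {
        long window = clock.getAsLong() / windowMillis; // current window index
        String bucket = key + ":" + window;
        int count = counters.merge(bucket, 1, Integer::sum);
        return count <= limit;
    }
}
```

Keying the bucket by caller identity (e.g., a UID) rather than globally gives the per-user limiting the article later recommends.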
Further optimization moves the stock reads and writes to a Redis cache. The service now retrieves count, sale, and version from Redis keys, performs the optimistic-lock update on the DB, and then atomically increments the Redis counters with redisTemplate.opsForValue().increment(...). This reduces database load dramatically.
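A minimal sketch of that cache-first flow, with a ConcurrentHashMap standing in for Redis and a supplier standing in for the DB's optimistic UPDATE (key names and method names are assumptions for illustration; the real service would go through redisTemplate):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BooleanSupplier;

public class CachedStockService {
    // Stand-ins for Redis keys such as stock:count / stock:sale / stock:version.
    private final Map<String, AtomicLong> cache = new ConcurrentHashMap<>();
    private final BooleanSupplier dbOptimisticUpdate; // true iff the UPDATE affected a row

    public CachedStockService(long count, BooleanSupplier dbOptimisticUpdate) {
        cache.put("stock:count", new AtomicLong(count));
        cache.put("stock:sale", new AtomicLong(0));
        cache.put("stock:version", new AtomicLong(0));
        this.dbOptimisticUpdate = dbOptimisticUpdate;
    }

    public boolean sell() {
        long count = cache.get("stock:count").get(); // read from cache, not the DB
        long sale  = cache.get("stock:sale").get();
        if (sale >= count) return false;             // sold out: reject before touching the DB
        if (!dbOptimisticUpdate.getAsBoolean()) return false; // version conflict in the DB
        // Mirrors redisTemplate.opsForValue().increment("stock:sale") etc.
        cache.get("stock:sale").incrementAndGet();
        cache.get("stock:version").incrementAndGet();
        return true;
    }
}
```

The DB remains the source of truth via the optimistic UPDATE; the cache only filters out hopeless requests before they reach it, which is where the load reduction comes from.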
Finally, the order‑creation and stock‑update steps are decoupled using Kafka. After passing rate limiting and Redis stock validation, the request publishes an order message to a Kafka topic; a separate consumer (implemented with Spring Boot) processes the message asynchronously, persisting the order and updating the stock. This makes the API response fast and further improves throughput.
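The decoupled flow can be sketched with a BlockingQueue standing in for the Kafka topic, so the producer/consumer shape is visible without a broker (in the real system the producer calls kafkaTemplate.send(...) and the consumer is a @KafkaListener method; all names below are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncOrderPipeline {
    record OrderMessage(int sid, String userId) {}

    // Stand-in for the Kafka topic. In the article's design the producer side is
    // roughly kafkaTemplate.send("order-topic", message) and the consumer side a
    // @KafkaListener(topics = "order-topic") method in a Spring Boot app.
    private final BlockingQueue<OrderMessage> topic = new ArrayBlockingQueue<>(1024);
    private int persistedOrders = 0;

    // Controller path: after rate limiting and Redis stock validation pass,
    // publish the order message and return immediately.
    public boolean submit(int sid, String userId) {
        return topic.offer(new OrderMessage(sid, userId));
    }

    // Consumer path: runs asynchronously; a real consumer loops here,
    // persisting the order and updating stock in the database.
    public void drainOnce() {
        OrderMessage msg = topic.poll(); // non-blocking poll for the sketch
        if (msg != null) persistedOrders++; // placeholder for createOrder(msg) + stock update
    }

    public int persistedOrders() { return persistedOrders; }
}
```

The trade-off is the usual one for async order creation: the API can only confirm "request accepted", not "order created", so the client must poll or be notified once the consumer has persisted the order.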
Extensive JMeter results are shown for each stage, demonstrating decreasing DB connections, lower latency, and correct order counts. The article concludes with best‑practice recommendations: upstream request filtering, UID‑based limiting, minimizing DB hits, leveraging caches, converting sync to async, and fail‑fast strategies.
Architecture Digest
Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.