
Analysis and Solutions for Redis Distributed Lock Over‑sell Incident in High‑Concurrency Seckill

This article examines a real-world over‑sell incident caused by an unsafe Redis distributed lock in a high‑traffic seckill service, analyzes the root causes such as lock expiration and non‑atomic stock checks, and presents safer lock implementations, atomic stock operations, and refactored code to prevent future overselling.

Top Architect

In modern systems, using Redis for distributed locking is common, but this article details a severe over‑sell incident during a flash sale of a scarce product ("Flying Maotai"): only 100 bottles were in stock, yet the system sold more than that.

The root cause was a combination of factors: the user service became a bottleneck under heavy load, pushing request latency beyond the 10‑second lock TTL; the lock expired while the business logic was still running, so other threads could acquire the same lock and enter the critical section concurrently. On top of that, stock verification used a non‑atomic get‑and‑compare, so concurrent lock holders raced on the same stock value.
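The failure mode can be reproduced deterministically. The sketch below (an in‑memory, single‑process simulation with hypothetical names, not the production code) walks through the exact interleaving: the TTL elapses mid‑request, a second holder enters, both pass the non‑atomic check, and stock goes negative:

```java
import java.util.HashMap;
import java.util.Map;

// Deterministic sketch of the race; a plain map and a counter stand in
// for Redis and wall-clock time.
public class OversellDemo {
    static class Entry { final String val; final long expireAt;
        Entry(String v, long e) { val = v; expireAt = e; } }

    static int run() {
        Map<String, Entry> redis = new HashMap<>();
        long now = 0;
        int stock = 1;

        // Thread A acquires the lock with a 10s TTL and reads stock == 1.
        redis.put("lock", new Entry("A", now + 10_000));
        int seenByA = stock;

        // The slow user-service call takes 15s, so the lock expires...
        now += 15_000;

        // ...and thread B acquires the same lock and also reads stock == 1.
        Entry e = redis.get("lock");
        if (e == null || e.expireAt <= now) {
            redis.put("lock", new Entry("B", now + 10_000));
        }
        int seenByB = stock;

        // Both passed the non-atomic get-and-compare, so both decrement.
        if (seenByA > 0) stock--;
        if (seenByB > 0) stock--;

        // A's naive unlock then deletes B's lock, widening the window further.
        redis.remove("lock");

        return stock; // -1: one more bottle sold than existed
    }

    public static void main(String[] args) {
        System.out.println("final stock = " + run());
    }
}
```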

To address these issues, the article proposes a safer distributed lock implementation that ties the lock’s release to a unique value using a Lua script, ensuring that only the owner can delete the lock:

public void safedUnLock(String key, String val) {
    // Compare-and-delete in one atomic Lua script: the lock is removed only
    // if the stored value still matches this caller's token. Note that "in"
    // is a reserved word in Lua and cannot be used as a variable name.
    String luaScript = "local expected = ARGV[1] "
            + "local current = redis.call('get', KEYS[1]) "
            + "if expected == current then redis.call('del', KEYS[1]) end "
            + "return 'OK'";
    RedisScript<String> redisScript = RedisScript.of(luaScript, String.class);
    redisTemplate.execute(redisScript, Collections.singletonList(key), val);
}
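The key point is that the compare and the delete happen in one atomic step; issued as two separate Redis calls from the client, they would reintroduce the race. As a self‑contained analogy (not the article's code), the JDK's two‑argument `ConcurrentHashMap.remove(key, value)` offers the same remove‑only‑if‑owner semantics:

```java
import java.util.concurrent.ConcurrentHashMap;

public class CompareAndDeleteDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();
        store.put("lock", "owner-B");          // B holds the lock now

        // A's stale token no longer matches, so its release is a no-op...
        boolean removedByA = store.remove("lock", "owner-A");
        // ...while B's matching token releases the lock atomically.
        boolean removedByB = store.remove("lock", "owner-B");

        System.out.println(removedByA + " " + removedByB); // false true
    }
}
```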

For stock verification, the article recommends leveraging Redis’s atomic increment operation instead of a separate get‑and‑compare step:

Long currStock = redisTemplate.opsForHash().increment("key", "stock", -1);
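The call decrements first and inspects the returned value. The article's refactored handler simply reports "no stock" once the result goes negative; a common refinement (an assumption here, not shown in the article) is a compensating increment so the counter does not drift further below zero. A minimal sketch with `AtomicLong` standing in for the Redis hash field:

```java
import java.util.concurrent.atomic.AtomicLong;

public class StockDecrementDemo {
    // Returns true if a unit of stock was secured. The atomic decrement
    // mirrors HINCRBY -1; the compensating increment undoes it once
    // stock is exhausted so the counter stays at zero.
    static boolean tryDeduct(AtomicLong stock) {
        long remaining = stock.addAndGet(-1);
        if (remaining < 0) {
            stock.incrementAndGet(); // roll back: nothing left to sell
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        AtomicLong stock = new AtomicLong(2);
        System.out.println(tryDeduct(stock)); // true
        System.out.println(tryDeduct(stock)); // true
        System.out.println(tryDeduct(stock)); // false
        System.out.println(stock.get());      // 0
    }
}
```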

The refactored seckill handling code incorporates the safe lock and atomic stock decrement, generating orders only when stock remains and releasing the lock via the Lua‑based method:

public SeckillActivityRequestVO seckillHandle(SeckillActivityRequestVO request) {
    SeckillActivityRequestVO response = null;
    String key = "key:" + request.getSeckillId();
    String val = UUID.randomUUID().toString();
    try {
        // Lock value is a per-request UUID so only this request can release it.
        Boolean lockFlag = distributedLocker.lock(key, val, 10, TimeUnit.SECONDS);
        if (!Boolean.TRUE.equals(lockFlag)) {
            // business exception
        }
        // user validation omitted for brevity
        // Atomic decrement replaces the non-atomic get-and-compare.
        Long currStock = stringRedisTemplate.opsForHash().increment(key + ":info", "stock", -1);
        if (currStock < 0) {
            log.error("[Seckill] No stock");
            // business exception
        } else {
            // generate order, publish event, build response
        }
    } finally {
        // Lua-based release: deletes the lock only if we still own it.
        distributedLocker.safedUnLock(key, val);
    }
    return response;
}

Beyond code changes, the article discusses whether a distributed lock is necessary at all, noting that Redis’s atomic operations can often replace the lock, but a lock can still help throttle traffic to downstream services. It also compares the simple lock with RedLock, highlighting trade‑offs between reliability and performance.
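To illustrate that claim, here is a self‑contained sketch (with `AtomicLong` standing in for Redis's `HINCRBY`, so it runs without a server): even with no lock at all, a single atomic decrement per request is enough to prevent over‑sell under heavy concurrency.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class LockFreeSeckillDemo {
    // Each "buyer" thread does one atomic decrement; a non-negative result
    // means a unit was secured. Exactly initialStock decrements can succeed.
    static int run(int initialStock, int buyerCount) throws InterruptedException {
        AtomicLong stock = new AtomicLong(initialStock);
        AtomicInteger sold = new AtomicInteger();

        Thread[] buyers = new Thread[buyerCount];
        for (int i = 0; i < buyers.length; i++) {
            buyers[i] = new Thread(() -> {
                if (stock.addAndGet(-1) >= 0) sold.incrementAndGet();
            });
            buyers[i].start();
        }
        for (Thread t : buyers) t.join();
        return sold.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 500 concurrent requests for 100 bottles: exactly 100 succeed.
        System.out.println("sold = " + run(100, 500));
    }
}
```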

Finally, the author reflects on the importance of thorough design, continuous learning, and the potential for further optimizations such as sharding stock across servers and using in‑memory structures for ultra‑low latency.

Java, Performance, Redis, High Concurrency, Distributed Lock, Atomicity, Seckill
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as architecture evolution with internet technologies. Architects who enjoy sharing ideas are welcome to exchange and learn together.
