Why Simple synchronized Locks Fail in Distributed Systems and How Redisson Fixes Them

This article examines common pitfalls of using single‑machine synchronization and basic SETNX locks for high‑concurrency stock‑deduction scenarios, demonstrates step‑by‑step improvements—including lock expiration and Redisson’s Lua‑based implementation—and discusses trade‑offs between Redis and Zookeeper for distributed locking.

Architect's Must-Have

Distributed Lock Scenarios

Internet flash sale

Coupon grabbing

API idempotency verification

Case 1 – Simulated stock deduction

The code below shows a simple Spring Boot controller that reads the stock value from Redis, decrements it, and writes it back. With five concurrent clients the final stock can end up at 99 instead of the expected 95, because every request reads the same initial value.

package com.wangcp.redisson;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IndexController {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    /**
     * Simulate order deduction scenario
     */
    @RequestMapping(value = "/duduct_stock")
    public String deductStock(){
        // Get current stock from Redis
        int stock = Integer.parseInt(stringRedisTemplate.opsForValue().get("stock"));
        if(stock > 0){
            int realStock = stock - 1;
            stringRedisTemplate.opsForValue().set("stock", realStock + "");
            System.out.println("Deduction succeeded, remaining stock: " + realStock);
        } else {
            System.out.println("Deduction failed, insufficient stock");
        }
        return "end";
    }
}

Assuming the initial Redis stock is 100, five simultaneous requests cause the race condition described above.
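The lost update is easy to reproduce deterministically in plain Java: force every thread to finish its read before any of them writes back, which is exactly the interleaving that occurs against Redis under load. This is an in-memory sketch (the static field stands in for the Redis `stock` key; the class and its names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

// Deterministic reproduction of the lost update: all "clients" read the
// shared value before any of them writes back.
public class LostUpdateDemo {
    static volatile int stock;

    static int run(int clients) {
        stock = 100;
        CountDownLatch allRead = new CountDownLatch(clients);
        CountDownLatch done = new CountDownLatch(clients);
        for (int i = 0; i < clients; i++) {
            new Thread(() -> {
                int read = stock;                 // every client reads 100
                allRead.countDown();
                try {
                    allRead.await();              // wait until everyone has read
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                stock = read - 1;                 // every client writes 99
                done.countDown();
            }).start();
        }
        try {
            done.await();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return stock;                             // 99: five deductions collapsed into one
    }

    public static void main(String[] args) {
        System.out.println("final stock = " + run(5));
    }
}
```

With five clients the method always returns 99: all five deductions collapse into one, which is precisely the race described above.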

Case 2 – Using synchronized for a single‑machine lock

Adding a synchronized block ensures only one thread can execute the critical section on a single JVM, but in a clustered deployment each instance has its own JVM, so the lock does not protect shared Redis data.

@RequestMapping(value = "/duduct_stock")
public String deductStock(){
    synchronized (this){
        // Get current stock from Redis
        int stock = Integer.parseInt(stringRedisTemplate.opsForValue().get("stock"));
        if(stock > 0){
            int realStock = stock - 1;
            stringRedisTemplate.opsForValue().set("stock", realStock + "");
            System.out.println("Deduction succeeded, remaining stock: " + realStock);
        } else {
            System.out.println("Deduction failed, insufficient stock");
        }
    }
    return "end";
}

Only one request can enter the method at a time, but the lock is ineffective across multiple service instances.

Case 3 – Using SETNX to implement a distributed lock

SETNX sets a key only if it does not already exist, providing a basic distributed lock. The example checks the result, performs the stock deduction, and finally deletes the lock key.

@RequestMapping(value = "/duduct_stock")
public String deductStock(){
    String lockKey = "product_001";
    // Try to acquire the lock; setIfAbsent maps to Redis SETNX
    Boolean result = stringRedisTemplate.opsForValue().setIfAbsent(lockKey, "wangcp");
    if(!Boolean.TRUE.equals(result)){ // null-safe: plain !result would throw NPE if setIfAbsent returned null
        return "error_code";
    }
    // Business logic
    int stock = Integer.parseInt(stringRedisTemplate.opsForValue().get("stock"));
    if(stock > 0){
        int realStock = stock - 1;
        stringRedisTemplate.opsForValue().set("stock", realStock + "");
        System.out.println("Deduction succeeded, remaining stock: " + realStock);
    } else {
        System.out.println("Deduction failed, insufficient stock");
    }
    // Release lock
    stringRedisTemplate.delete(lockKey);
    return "end";
}

Problems: if the process crashes (or throws) after acquiring the lock but before deleting it, the lock is never released and every later request is blocked (deadlock); and because all requests write the same lock value, a slow request can delete a lock that by then belongs to another request.
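The wrong-holder release has a well-known fix from the Redis documentation's locking pattern: store a unique token as the lock value and delete the key only if the token still matches, with the compare and the delete executed as one atomic Lua script. Sketched here as a Java constant (the class name is illustrative; with Spring Data Redis it could be run via a DefaultRedisScript):

```java
// Canonical "delete only if I still own it" script from the Redis docs'
// SET-based locking pattern. KEYS[1] is the lock key; ARGV[1] is the unique
// token stored when the lock was acquired. Illustrative sketch.
public final class SafeUnlockScript {
    public static final String RELEASE_IF_OWNER =
        "if redis.call('get', KEYS[1]) == ARGV[1] then " +
        "    return redis.call('del', KEYS[1]) " +
        "else " +
        "    return 0 " +
        "end";

    private SafeUnlockScript() {}
}
```

Because Redis runs a Lua script as a single atomic unit, no other client can acquire the lock between the GET and the DEL.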

Case 4 – Adding expiration time to the lock

Adding a TTL prevents permanent deadlock, but a fixed timeout creates a new failure mode when the business logic outruns it. Consider three concurrent requests and a 10 s timeout: request A acquires the lock but its processing takes 15 s; at the 10 s mark the key expires and request B acquires the lock; at 15 s A finishes and deletes the key, releasing B's lock, after which request C can acquire it while B is still running.
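The TTL-plus-owner-token pattern this case describes can be sketched against a plain map, so the example runs without a Redis server (with Spring Data Redis the acquisition would be a single setIfAbsent call with a timeout argument; everything below is an illustrative stand-in, not Redis itself):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// In-memory analogue of "SET lockKey token NX PX ttl" plus owner-checked
// release. Illustrative sketch only.
public class ExpiringLock {
    private static final class Entry {
        final String token;
        final long expiresAt;
        Entry(String token, long expiresAt) { this.token = token; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    /** Returns an owner token on success, or null if the lock is held and unexpired. */
    public String tryLock(String key, long ttlMillis) {
        String token = UUID.randomUUID().toString();
        long now = System.currentTimeMillis();
        Entry winner = store.merge(key, new Entry(token, now + ttlMillis),
                (current, candidate) -> current.expiresAt <= now ? candidate : current);
        return winner.token.equals(token) ? token : null;
    }

    /** Releases only if the caller still owns the lock (token matches). */
    public boolean unlock(String key, String token) {
        Entry current = store.get(key);
        if (current != null && current.token.equals(token)) {
            // NOTE: get-then-remove is two steps; against real Redis this pair
            // must run as one Lua script, or the key can expire and be taken
            // over by another client between the check and the delete.
            return store.remove(key, current);
        }
        return false;
    }
}
```

The token check prevents one request from deleting another's lock, but as the comment notes, the check-then-delete window remains unless the release is made atomic, which is exactly the gap Redisson's Lua scripts close in the next case.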

Case 5 – Redisson distributed lock

Redisson provides a high‑level API that internally uses Lua scripts for atomic lock acquisition and automatic lease renewal.

Dependency

<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.6.5</version>
</dependency>

Client initialization

@Bean
public RedissonClient redisson(){
    // Single server mode
    Config config = new Config();
    config.useSingleServer().setAddress("redis://192.168.3.170:6379").setDatabase(0);
    return Redisson.create(config);
}

Lock usage

@RestController
public class IndexController {

    @Autowired
    private RedissonClient redisson;

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @RequestMapping(value = "/duduct_stock")
    public String deductStock(){
        String lockKey = "product_001";
        // 1. Get lock object
        RLock redissonLock = redisson.getLock(lockKey);
        try{
            // 2. Acquire lock (equivalent to setIfAbsent with TTL)
            redissonLock.lock();
            int stock = Integer.parseInt(stringRedisTemplate.opsForValue().get("stock"));
            if(stock > 0){
                int realStock = stock - 1;
                stringRedisTemplate.opsForValue().set("stock", realStock + "");
                System.out.println("Deduction succeeded, remaining stock: " + realStock);
            } else {
                System.out.println("Deduction failed, insufficient stock");
            }
        } finally {
            // 3. Release lock
            redissonLock.unlock();
        }
        return "end";
    }
}

Redisson’s lock() method executes a Lua script that atomically creates the lock key as a hash, sets the lease time, and increments a re‑entrancy counter for the owning thread; while the lock is held, a watchdog task (the scheduleExpirationRenewal method below) periodically extends the lease.
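For reference, the acquisition script inside Redisson's RedissonLock is essentially the following, reproduced here as a Java constant and lightly reformatted (paraphrased from the 3.x source; check the exact text of the version you use):

```java
// Paraphrase of the acquire script in org.redisson.RedissonLock (3.x).
// KEYS[1] is the lock name, ARGV[1] the lease time in ms, and ARGV[2] the
// "uuid:threadId" identity of the acquiring thread.
public final class RedissonAcquireScript {
    public static final String ACQUIRE =
        // not held yet: create the hash, record re-entrancy count 1, set lease
        "if (redis.call('exists', KEYS[1]) == 0) then " +
            "redis.call('hset', KEYS[1], ARGV[2], 1); " +
            "redis.call('pexpire', KEYS[1], ARGV[1]); " +
            "return nil; " +
        "end; " +
        // already held by this thread: bump the counter, refresh the lease
        "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
            "redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
            "redis.call('pexpire', KEYS[1], ARGV[1]); " +
            "return nil; " +
        "end; " +
        // held by someone else: return the remaining TTL so the caller can wait
        "return redis.call('pttl', KEYS[1]);";

    private RedissonAcquireScript() {}
}
```

Because the whole script runs atomically inside Redis, the exists/hexists checks, the counter update, and the expiry refresh cannot interleave with another client.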

private void scheduleExpirationRenewal(final long threadId){
    if(expirationRenewalMap.containsKey(getEntryName())){
        return;
    }
    Timeout task = commandExecutor.getConnectionManager().newTimeout(new TimerTask(){
        @Override
        public void run(Timeout timeout) throws Exception{
            RFuture<Boolean> future = commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN,
                "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
                "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                "return 1; " +
                "end; " +
                "return 0;",
                Collections.<Object>singletonList(getName()), internalLockLeaseTime, getLockName(threadId));
            future.addListener(new FutureListener<Boolean>(){
                @Override
                public void operationComplete(Future<Boolean> future) throws Exception{
                    expirationRenewalMap.remove(getEntryName());
                    if(!future.isSuccess()){
                        log.error("Can't update lock " + getName() + " expiration", future.cause());
                        return;
                    }
                    if(future.getNow()){
                        scheduleExpirationRenewal(threadId);
                    }
                }
            });
        }
    }, internalLockLeaseTime / 3, TimeUnit.MILLISECONDS);
    if(expirationRenewalMap.putIfAbsent(getEntryName(), task) != null){
        task.cancel();
    }
}

The renewal task runs periodically (default lease 30 s, renewal every 10 s) to extend the lock while it is held, preventing accidental expiration.

Further considerations

In a Redis cluster, if the master fails before replicating the lock key to slaves, the lock may be lost.

Distributed locks serialize concurrent requests, which can reduce throughput; lock sharding or segmenting (e.g., using multiple lock keys for different stock ranges) can improve parallelism.
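Segmenting can be sketched as follows, with in-process ReentrantLocks standing in for per-segment distributed locks so the example runs by itself (in a real deployment each segment would map to its own lock key, e.g. something like redisson.getLock("stock_seg_" + i); all names here are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Splits one hot "stock" counter into N independently locked segments,
// so up to N deductions can proceed in parallel instead of one at a time.
public class SegmentedStock {
    private final ReentrantLock[] locks;   // stand-ins for per-segment lock keys
    private final int[] segments;

    public SegmentedStock(int totalStock, int segmentCount) {
        locks = new ReentrantLock[segmentCount];
        segments = new int[segmentCount];
        for (int i = 0; i < segmentCount; i++) {
            locks[i] = new ReentrantLock();
            // spread the stock evenly, with the remainder in the first segments
            segments[i] = totalStock / segmentCount + (i < totalStock % segmentCount ? 1 : 0);
        }
    }

    /** Deduct one unit, starting from a requester-dependent segment and
     *  falling back to the others if that segment is empty. */
    public boolean deduct(int requestId) {
        int start = Math.floorMod(requestId, segments.length);
        for (int i = 0; i < segments.length; i++) {
            int seg = (start + i) % segments.length;
            locks[seg].lock();
            try {
                if (segments[seg] > 0) {
                    segments[seg]--;
                    return true;
                }
            } finally {
                locks[seg].unlock();
            }
        }
        return false;                      // every segment is empty
    }

    public int remaining() {
        int sum = 0;
        for (int s : segments) sum += s;
        return sum;
    }
}
```

Requests hashed to different segments never contend for the same lock, trading a little bookkeeping (even distribution, fallback when a segment drains) for roughly N-way parallelism on the hot key.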

Choosing between Redis and Zookeeper comes down to the consistency-versus-performance trade‑off: Zookeeper (CP) offers stronger consistency guarantees but lower throughput, while Redis (AP) delivers higher QPS at the cost of rare lock loss during failover, which many business scenarios can tolerate.

Tags: Redis, redisson, spring-boot, distributed-lock