Avoid These 8 Common Pitfalls When Using Redis Distributed Locks

This article examines the most frequent problems encountered with Redis distributed locks—including non‑atomic operations, forgotten releases, accidental unlocking of others, massive request failures, re‑entrancy, lock contention, timeout handling, and master‑slave replication—while offering practical code examples and mitigation strategies.


Preface

Redis distributed locks are popular because they are simple and efficient, but misuse can cause serious issues. This article reviews common pitfalls and provides reference solutions.

1. Non‑Atomic Operations

Typical code calls setnx to acquire the lock and then expire to set its timeout. These are two separate commands, so the sequence is not atomic: if the client crashes or the expire call fails after setnx succeeds, the key never expires, stale locks accumulate, and Redis memory is gradually exhausted.
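For reference, the flawed two-step pattern can be sketched as follows (a Jedis sketch; lockKey, requestId, and expireTimeSeconds are assumed placeholder names):

```java
// ANTI-PATTERN: setnx and expire are two separate round trips
Long acquired = jedis.setnx(lockKey, requestId);
if (acquired == 1L) {
    // if the process crashes right here, the key has no TTL
    // and the lock is never released
    jedis.expire(lockKey, expireTimeSeconds);
    return true;
}
return false;
```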

The atomic alternative is the SET command with NX and PX options:

String result = jedis.set(lockKey, requestId, "NX", "PX", expireTime);
if ("OK".equals(result)) {
    return true;
}
return false;

This single command both acquires the lock and sets its expiration.

2. Forgetting to Release Locks

Using SET atomically solves the previous issue, but the lock still needs to be released once the work is done. A typical pattern is:

String result = jedis.set(lockKey, requestId, "NX", "PX", expireTime);
if (!"OK".equals(result)) {
    return false;
}
try {
    // business logic
    return true;
} finally {
    unlock(lockKey);
}

Releasing the lock in a finally block guarantees execution whether the business logic completes normally or throws. Note that the release must run only when the lock was actually acquired; otherwise a failed acquirer could delete a lock held by someone else.

3. Releasing Someone Else's Lock

In high‑concurrency scenarios a thread may release a lock it did not acquire: thread A's lock expires while its task is still running, thread B acquires the lock, and A's cleanup then deletes B's lock. To prevent this, store a unique requestId when locking and verify it before deletion:

if (jedis.get(lockKey).equals(requestId)) {
    jedis.del(lockKey);
    return true;
}
return false;

Only the owner of a lock may release it. However, get and del here are still two separate commands, leaving a race window between the check and the delete.

Lua scripts can perform the check and delete atomically:

if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
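Invoking this script from Jedis might look like the sketch below (jedis, lockKey, and requestId are carried over from the earlier examples):

```java
import java.util.Collections;

// The script executes atomically inside Redis: no other client can
// delete the key between the GET and the DEL.
String script =
    "if redis.call('get', KEYS[1]) == ARGV[1] then " +
    "  return redis.call('del', KEYS[1]) " +
    "else " +
    "  return 0 " +
    "end";
Object result = jedis.eval(script,
        Collections.singletonList(lockKey),    // KEYS[1]
        Collections.singletonList(requestId)); // ARGV[1]
boolean released = Long.valueOf(1L).equals(result);
```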

4. Massive Failed Requests

When many clients compete for a single lock, most requests fail. In scenarios like file‑upload directory creation, a spin‑lock with retry and timeout can improve success rates:

long start = System.currentTimeMillis();
while (true) {
    String result = jedis.set(lockKey, requestId, "NX", "PX", expireTime);
    if ("OK".equals(result)) {
        try {
            if (!exists(path)) {
                mkdir(path);
            }
            return true;
        } finally {
            unlock(lockKey, requestId);
        }
    }
    if (System.currentTimeMillis() - start >= timeout) {
        return false;
    }
    Thread.sleep(50);
}

5. Re‑entrant Lock Issue

Recursive methods that acquire the same lock at each level will dead‑lock on the second acquisition. Using Redisson’s re‑entrant lock solves this:

RLock lock = redisson.getLock(lockKey);
lock.lock(5, TimeUnit.SECONDS);
try {
    // business logic
} finally {
    lock.unlock();
}

Redisson implements re‑entrancy with a Lua script that stores the lock as a Redis hash and increments a hold counter when the same client and thread re‑acquire it; unlocking decrements the counter and deletes the key only when the counter reaches zero.
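A hypothetical recursive traversal illustrating the re-entrant behavior (redisson, childOf, and MAX_DEPTH are assumed names, not from the original):

```java
void traverse(String nodeId, int depth) {
    RLock lock = redisson.getLock("tree_lock");
    lock.lock();                // same thread may acquire again: hold count +1
    try {
        // ... process nodeId ...
        if (depth < MAX_DEPTH) {
            traverse(childOf(nodeId), depth + 1); // nested acquisition succeeds
        }
    } finally {
        lock.unlock();          // hold count -1; key deleted only at zero
    }
}
```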

6. Lock Contention Issues

6.1 Read‑Write Locks

Read‑write locks allow concurrent reads while writes remain exclusive. Example with Redisson:

RReadWriteLock rwLock = redisson.getReadWriteLock("readWriteLock");
RLock rLock = rwLock.readLock();
rLock.lock();
try {
    // read operation
} finally {
    rLock.unlock();
}

RLock wLock = rwLock.writeLock();
wLock.lock();
try {
    // write operation
} finally {
    wLock.unlock();
}

6.2 Lock Segmentation

Splitting a large lock into multiple segments (e.g., 100 shards) reduces contention. In a flash‑sale scenario each shard handles a subset of inventory, dramatically lowering lock conflicts.
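One minimal way to pick a segment is to hash the item onto one of N lock keys (a self-contained sketch; the stock_lock: prefix is an assumed naming convention):

```java
public class LockSharding {
    // Map an item onto one of `segments` lock keys so that requests for
    // different segments never contend on the same lock.
    public static String segmentLockKey(String itemId, int segments) {
        int slot = Math.floorMod(itemId.hashCode(), segments); // always 0..segments-1
        return "stock_lock:" + slot;
    }
}
```

Each request then locks only its own segment's key, e.g. segmentLockKey(skuId, 100), so on average only 1/100 of the traffic competes for any single lock.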

7. Lock Timeout Issues

If a lock expires while business logic is still running, subsequent code executes without protection. Automatic lock renewal (watch‑dog) solves this:

Timer timer = new Timer();
timer.schedule(new TimerTask() {
    @Override
    public void run() {
        // refresh the expiration if this requestId still holds the lock
    }
}, 10_000, 10_000); // delay and period, both in milliseconds

Redisson provides a built‑in watch‑dog that periodically extends the TTL as long as the lock holder is alive.
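With Redisson, the watch-dog is enabled simply by omitting the lease time (a sketch; by default the watch-dog sets a 30-second TTL and renews it roughly every 10 seconds while the holder is alive):

```java
RLock lock = redisson.getLock("order_lock");
lock.lock();        // no leaseTime → the watch-dog keeps extending the TTL
try {
    // long-running business logic stays protected for its full duration
} finally {
    lock.unlock();  // stops the renewal and releases the lock
}
```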

8. Master‑Slave Replication Problems

In a master‑slave setup, if the master crashes after a lock is acquired but before replication, the lock is lost. Redisson’s RedissonRedLock implements the Redlock algorithm across multiple independent Redis instances to mitigate this risk.

if (acquiredNodes >= N / 2 + 1) {
    // lock acquired on a majority of nodes
} else {
    // lock failed
}

Redlock improves safety at the cost of additional resources and latency.
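With Redisson, the majority-vote logic is handled internally by RedissonRedLock; a usage sketch against three clients (redisson1 through redisson3 are assumed to point at independent Redis instances):

```java
RLock lock1 = redisson1.getLock("resource_lock");
RLock lock2 = redisson2.getLock("resource_lock");
RLock lock3 = redisson3.getLock("resource_lock");
// Acquisition succeeds only on a majority of the underlying instances
RedissonRedLock redLock = new RedissonRedLock(lock1, lock2, lock3);
redLock.lock();
try {
    // business logic
} finally {
    redLock.unlock();
}
```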

Choosing between CP (e.g., Zookeeper) and AP (e.g., Redis) solutions depends on whether consistency or availability is more critical for the application.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Written by

macrozheng

Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.
