Master Redis Distributed Locks: Prevent Race Conditions, Zombie Locks, and Expiration Issues
This guide explains how Redis implements distributed locks, outlines common pitfalls such as lock contention, zombie locks, and mismatched expiration times, and walks through step-by-step solutions: the single-node SET command, the Redlock high-availability algorithm, Lua-based safe release, and best-practice recommendations for real-world deployments.
Why Redis Distributed Locks Matter
In a distributed system, Redis acts as a shared data store that multiple nodes use to coordinate access to critical resources. Without a consistent locking protocol, nodes can race for the same lock, create "zombie" locks that never release, or suffer from lock expiration that interrupts ongoing tasks.
Typical Problems
Lock contention: Two nodes read the lock as free, then both acquire it, causing concurrent writes.
Zombie lock: A node acquires a lock without setting an expiration and then crashes, leaving the lock held forever.
Expiration mismatch: A lock expires before the task finishes, allowing another node to acquire it while the original node is still working.
Single-node reliability: Deploying Redis on a single instance creates a single point of failure for the lock service.
Single‑Node Lock Mechanism
The core command is a single atomic operation: SET lock_key unique_value NX EX 30, where:
unique_value: uniquely identifies the requester.
NX: sets the key only if it does not already exist, preventing contention.
EX 30: sets a 30-second expiration to avoid zombie locks.
If the command succeeds, the lock is acquired with an expiration; otherwise the lock is unavailable.
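The acquire step can be sketched in Python. To keep the example self-contained, FakeRedis below is an illustrative in-memory stand-in for a Redis node's SET ... NX EX behavior, not a real client library; with a real deployment you would issue the SET command against the server instead.

```python
import time
import uuid

class FakeRedis:
    """Minimal in-memory stand-in for one Redis node, supporting SET ... NX EX."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set_nx_ex(self, key, value, ttl):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return False          # key exists and has not expired: NX fails
        self._store[key] = (value, now + ttl)
        return True

def acquire_lock(node, lock_key, ttl=30):
    token = str(uuid.uuid4())     # unique_value identifying this requester
    if node.set_nx_ex(lock_key, token, ttl):
        return token              # lock acquired; keep the token to release later
    return None                   # lock currently held by someone else

node = FakeRedis()
t1 = acquire_lock(node, "order:42")   # succeeds: key was free
t2 = acquire_lock(node, "order:42")   # fails: key exists and has not expired
```

The returned token is what later proves ownership at release time, which is why a random UUID rather than a fixed string matters.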
High‑Availability with Redlock
To eliminate the single-node weakness, the Redlock algorithm uses multiple independent Redis instances (typically 5). A lock is considered acquired only if a majority of nodes (more than N/2, i.e., at least ⌊N/2⌋ + 1) grant it.
Send SET lock_key unique_value NX EX ttl to each Redis node.
Count successful responses; if the count > N/2 and the total time spent acquiring is less than the lock's TTL, the lock is acquired.
If the count ≤ N/2, immediately release any partial locks to avoid zombie locks.
This approach tolerates the failure of a minority of nodes (up to ⌊N/2⌋, e.g., 2 of 5) while still providing mutual exclusion.
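The majority-vote logic above can be sketched as follows. Node is again an illustrative in-memory stand-in for one independent Redis instance; a real implementation would also subtract the elapsed acquisition time from the TTL before declaring success, as the Redlock algorithm specifies.

```python
import time
import uuid

class Node:
    """Tiny in-memory stand-in for one independent Redis instance."""
    def __init__(self):
        self.store = {}  # key -> (value, expires_at)

    def set_nx_ex(self, key, value, ttl):
        now = time.monotonic()
        if key in self.store and self.store[key][1] > now:
            return False
        self.store[key] = (value, now + ttl)
        return True

    def delete_if_owner(self, key, value):
        # Only remove the key if this token still owns it.
        if key in self.store and self.store[key][0] == value:
            del self.store[key]

def redlock_acquire(nodes, key, ttl):
    token = str(uuid.uuid4())
    granted = [n for n in nodes if n.set_nx_ex(key, token, ttl)]
    if len(granted) > len(nodes) // 2:     # strict majority of N nodes
        return token
    for n in granted:                      # minority only: roll back partial locks
        n.delete_if_owner(key, token)
    return None

nodes = [Node() for _ in range(5)]
token = redlock_acquire(nodes, "job:7", ttl=30)   # first caller wins a majority
```

Rolling back partial grants on failure is the step that prevents a failed acquisition from leaving zombie locks scattered across a minority of nodes.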
Safe Unlock with Lua
Unlocking must verify ownership before deletion to avoid removing another node’s lock. A Lua script can perform the check‑and‑delete atomically:
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
else
    return 0
end
The script returns 1 if the lock is removed, 0 otherwise.
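The check-and-delete logic can be mirrored in Python to make the ownership rule concrete. Note this is only an illustration of the script's semantics: in real Redis the Lua script executes atomically on the server, whereas here a plain function and dict stand in for that atomicity.

```python
import uuid

store = {}  # stand-in for a single Redis node's keyspace

def safe_unlock(key, token):
    """Mirror of the Lua check-and-delete: only the owner may release.

    Returns 1 if the lock was removed, 0 otherwise, matching the script."""
    if store.get(key) == token:
        del store[key]
        return 1
    return 0

owner = str(uuid.uuid4())
store["lock:a"] = owner
intruder_result = safe_unlock("lock:a", "some-other-token")  # 0: wrong token
owner_result = safe_unlock("lock:a", owner)                  # 1: removed
```

Without the GET comparison, a node whose lock had already expired could blindly DEL a lock that a different node now legitimately holds.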
Practical Deployment Guidelines
Unique identifier: Use a globally unique UUID for each lock request.
Expiration buffer: Set the TTL longer than the expected task duration (e.g., task 10 s → TTL 30 s).
Retry policy: On a nil response (lock already held), retry a limited number of times with short back-off, or fail fast.
Watch-dog renewal: For long-running tasks, run a background thread that periodically extends the lock TTL (e.g., every 10 s).
Node selection for Redlock: Deploy an odd number of independent nodes (3, 5, 7), avoid master-slave replication, and place them in separate data centers.
Lock granularity: Choose a lock scope that balances contention and overhead; avoid both a single global lock and an excessively fine-grained lock per data item.
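The watch-dog renewal idea can be sketched as a background thread. The expirations dict here is an illustrative stand-in for the server-side TTL (a real watch-dog would issue EXPIRE/PEXPIRE against Redis); the class name and intervals are assumptions for the demo, with short timings so the effect is visible.

```python
import threading
import time

class Watchdog:
    """Background renewer: extends the lock's expiry while the task runs."""
    def __init__(self, expirations, key, ttl, interval):
        self.expirations = expirations  # stand-in for server-side TTLs
        self.key = key
        self.ttl = ttl
        self.interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Wake up every `interval` seconds until stopped.
        while not self._stop.wait(self.interval):
            if self.key in self.expirations:       # renew only while we hold it
                self.expirations[self.key] = time.monotonic() + self.ttl

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

# A lock with a 1-second TTL, guarded by a watch-dog renewing every 0.2 s.
expirations = {"lock:report": time.monotonic() + 1.0}
dog = Watchdog(expirations, "lock:report", ttl=1.0, interval=0.2)
dog.start()
time.sleep(1.5)               # the task outlives the original 1 s TTL
still_held = expirations["lock:report"] > time.monotonic()
dog.stop()
```

Because the watch-dog lives inside the holder's process, a crash stops renewal and the lock expires naturally, which is exactly the fail-safe behavior you want.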
Common Pitfalls
Expecting the watch-dog to keep a crashed node's lock alive: the watch-dog runs inside the holder's process, so if that process crashes, renewal stops and the lock simply expires; any half-finished work is not protected.
Overusing Redlock for low‑risk operations, which adds unnecessary latency.
Setting lock TTL too short, causing premature expiration while the task is still running.
By following these principles—using atomic SET with NX and EX, applying Redlock for critical paths, employing Lua for safe release, and tuning TTL and granularity—you can build a reliable Redis‑based distributed lock system that ensures mutual exclusion, avoids deadlocks, and provides high availability.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
NiuNiu MaTe
Joined Tencent (nicknamed "Goose Factory") through campus recruitment at a second‑tier university. Career path: Tencent → foreign firm → ByteDance → Tencent. Started as an interviewer at the foreign firm and hopes to help others.
