Understanding Distributed Locks: Concepts, System Classification, and Implementations with Redis and etcd/Zookeeper
This article explains the fundamentals of distributed locks, compares lock implementations based on asynchronous replication and Paxos protocols, and provides practical Redis and etcd/Zookeeper examples—including exclusive and shared lock mechanisms, code snippets, and usage considerations for reliability and safety.
Distributed locks are a crucial primitive in distributed environments, ensuring mutual exclusion when multiple processes access shared resources.
System Classification
Based on lock safety, distributed locks can be divided into two categories:
Systems using asynchronous replication (e.g., Redis, MySQL). Because replication is asynchronous, a lock can be lost during failover; these systems typically rely on TTL mechanisms and suit short‑lived, fault‑tolerant tasks.
Systems based on consensus protocols in the Paxos family (e.g., etcd with Raft, Zookeeper with ZAB). These provide higher safety through lease mechanisms and are suitable for long‑running locks where loss is unacceptable.
Redis‑Based Distributed Lock
To acquire a lock atomically, use the SET command with the NX and EX options:

SET lock_name value NX EX lock_time

Key options:
EX seconds: set the key's expiration time in seconds.
NX: set only if the key does not exist (equivalent to SETNX).
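To make the NX + EX semantics concrete, here is a minimal sketch that simulates the acquisition step with an in-memory store instead of a real Redis server. The `FakeRedis` class and `set_nx_ex` method are illustrative stand-ins, not part of any Redis client library.

```python
import time
import uuid

class FakeRedis:
    """In-memory stand-in for Redis SET NX EX semantics (illustration only)."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set_nx_ex(self, key, value, ttl_seconds):
        now = time.monotonic()
        entry = self._store.get(key)
        # Honor EX: discard the key if its TTL has already elapsed.
        if entry is not None and entry[1] <= now:
            del self._store[key]
            entry = None
        # Honor NX: fail if the key still exists.
        if entry is not None:
            return False
        self._store[key] = (value, now + ttl_seconds)
        return True

r = FakeRedis()
token = str(uuid.uuid4())
assert r.set_nx_ex("lock_name", token, 10) is True    # first caller acquires the lock
assert r.set_nx_ex("lock_name", "other", 10) is False  # second caller is rejected
```

With a real Redis client, the same step is a single round trip (e.g., `SET lock_name <uuid> NX EX 10`), so acquisition and expiry are set atomically on the server.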
Release the lock by deleting the key:

DEL lock_name

To avoid accidentally deleting another process's lock, include a unique identifier (UUID) as the value when setting the lock:

SET lock_name uuid NX EX lock_time

When releasing, compare the stored UUID with the caller's UUID and delete only on a match, performing both steps atomically, typically with a Lua script.
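The standard release pattern executes a GET-compare-DEL as one atomic unit on the server via a Lua script. Below, the script is shown as a string for reference, and a small in-memory `release` function (an illustrative stand-in, not a Redis API) sketches the same compare-and-delete logic:

```python
# Lua script commonly run via EVAL: delete the key only if the caller owns it.
RELEASE_SCRIPT = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
else
    return 0
end
"""

def release(store, key, token):
    """In-memory sketch of the atomic compare-and-delete the Lua script performs."""
    if store.get(key) == token:
        del store[key]
        return True
    return False

store = {"lock_name": "uuid-A"}
assert release(store, "lock_name", "uuid-B") is False  # wrong owner: lock is kept
assert release(store, "lock_name", "uuid-A") is True   # owner: lock is deleted
assert "lock_name" not in store
```

Running the comparison client-side (GET, then DEL) would leave a race window in which the lock expires and another process acquires it between the two commands; the Lua script closes that window because Redis executes scripts atomically.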
etcd/Zookeeper Based Distributed Lock
Two lock types are supported:
Exclusive lock (write lock): only one client can hold the lock at a time. Clients attempt to create a temporary (ephemeral) node under /exclusive_lock; the first successful creator obtains the lock.
Shared lock (read lock): multiple clients can hold the lock simultaneously for reading, but a write still requires exclusive access.
Exclusive lock workflow:
Create a temporary node /exclusive_lock/lock1. Only one client succeeds.
Clients that fail watch the /exclusive_lock node for changes.
When the lock node is removed (either due to client crash or normal release), all watchers are notified and retry acquisition.
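The exclusive-lock workflow above can be sketched with a toy in-memory node store. `FakeNodeStore` and its methods are illustrative stand-ins for a Zookeeper/etcd namespace, not a real client API; in practice `create_ephemeral` maps to an atomic create of an ephemeral node, and the failed client would also register a watch:

```python
class FakeNodeStore:
    """Toy stand-in for a Zookeeper/etcd node namespace (illustration only)."""
    def __init__(self):
        self.nodes = set()

    def create_ephemeral(self, path):
        # Atomic create-if-absent, like creating an ephemeral znode.
        if path in self.nodes:
            return False
        self.nodes.add(path)
        return True

    def delete(self, path):
        # Normal release; a client crash would remove the node the same way
        # when its session expires.
        self.nodes.discard(path)

zk = FakeNodeStore()
assert zk.create_ephemeral("/exclusive_lock/lock1") is True   # client A wins
assert zk.create_ephemeral("/exclusive_lock/lock1") is False  # client B must watch and wait
zk.delete("/exclusive_lock/lock1")                             # A releases (or crashes)
assert zk.create_ephemeral("/exclusive_lock/lock1") is True   # B is notified, retries, succeeds
```

With a real client library (e.g., kazoo for Zookeeper), the ephemeral-node creation, the watch on the parent node, and the retry loop are usually wrapped in a ready-made lock recipe.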
Shared lock workflow (using Zookeeper/etcd sequential nodes):
Clients create a sequential temporary node under /shared_lock, e.g., /shared_lock/host1-R-001 for read or /shared_lock/host1-W-001 for write.
Each client determines its position among all child nodes.
Read requests succeed if there is no preceding write node; otherwise they wait.
Write requests succeed only if they hold the smallest sequence number; otherwise they wait.
When a lock node is removed, all watchers are notified and the acquisition process repeats.
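The read/write grant rules above reduce to a pure decision function over the sorted child list. The sketch below assumes the node naming from the text (host-R-seq for reads, host-W-seq for writes); the function name and parsing helpers are hypothetical, for illustration only:

```python
def can_acquire(children, my_node):
    """Given all child nodes under /shared_lock and this client's own node,
    decide whether the lock can be taken now (True) or must wait (False)."""
    def seq(name):
        return int(name.rsplit("-", 1)[1])   # trailing sequence number

    def kind(name):
        return name.split("-")[1]            # "R" (read) or "W" (write)

    ordered = sorted(children, key=seq)
    idx = ordered.index(my_node)
    if kind(my_node) == "W":
        # A write request proceeds only with the smallest sequence number.
        return idx == 0
    # A read request proceeds only if no write node precedes it.
    return all(kind(n) != "W" for n in ordered[:idx])

children = ["host1-R-001", "host2-R-002", "host3-W-003", "host4-R-004"]
assert can_acquire(children, "host1-R-001") is True   # no writer ahead
assert can_acquire(children, "host2-R-002") is True   # reads share the lock
assert can_acquire(children, "host3-W-003") is False  # writer not at the front
assert can_acquire(children, "host4-R-004") is False  # blocked by the preceding writer
```

When a watcher is notified of a node removal, it re-fetches the children and re-runs this check, which is exactly the "acquisition process repeats" step in the workflow.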
Both exclusive and shared locks are released by deleting the corresponding temporary node, which automatically happens if the client crashes (the session expires).
Overall, choosing between TTL‑based Redis locks and consensus‑based etcd/Zookeeper locks depends on the required safety, lock duration, and workload characteristics.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
360 Smart Cloud
Official service account of 360 Smart Cloud, dedicated to building a high-quality, secure, highly available, convenient, and stable one‑stop cloud service platform.