Understanding Curator's InterProcessMutex Distributed Lock in Java
This article explains how to replace a Redis‑based lock with Curator's InterProcessMutex, detailing its re‑entrant design, node creation, lock acquisition logic, waiting mechanism, and release process, and highlighting its advantages over sleep‑and‑retry polling for multithreaded resource access.
When developing a project that required multiple threads to use a shared resource within a bounded time, the author initially used Redis for locking: each thread attempted to obtain a common lock and, on failure, repeatedly slept before retrying. This led either to excessive Redis traffic or to delayed use of the resource.
To address these drawbacks, the author switched to Apache Curator's distributed lock mechanism, which provides a more efficient mutual exclusion solution.
Curator is to ZooKeeper what Guava is to Java: it offers a rich set of features such as fluent APIs, leader election, distributed locks, counters, and barriers, simplifying production‑grade coordination tasks.
The focus is on InterProcessMutex, a re‑entrant distributed lock. Its basic constructor is shown below:

public InterProcessMutex(CuratorFramework client, String path)
{
    this(client, path, LOCK_NAME, 1, new StandardLockInternalsDriver());
}

Internally, the constructor creates a LockInternals instance that handles all lock acquisition and release operations.
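Before looking at the internals, here is a minimal usage sketch (not from the article); the connection string, retry policy, and lock path are illustrative assumptions:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;
import java.util.concurrent.TimeUnit;

public class LockExample {
    public static void main(String[] args) throws Exception {
        // connection string and retry policy are example values
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex mutex = new InterProcessMutex(client, "/zklock/activityno");
        // bounded wait: give up after 5 seconds instead of sleeping and retrying
        if (mutex.acquire(5, TimeUnit.SECONDS)) {
            try {
                // ... use the shared resource ...
            } finally {
                mutex.release();
            }
        }
        client.close();
    }
}
```

Unlike the Redis approach, the thread blocks inside acquire() until the lock is available or the timeout expires; there is no manual sleep loop.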
Lock acquisition is performed by InterProcessMutex.internalLock(), which supports both unbounded and time‑bounded waiting. A simplified excerpt of the method is:
Thread currentThread = Thread.currentThread();
LockData lockData = threadData.get(currentThread);
if (lockData != null)
{
    // re-entrant: this thread already holds the lock, just bump the count
    lockData.lockCount.incrementAndGet();
    return true;
}
String lockPath = internals.attemptLock(time, unit, getLockNodeBytes());
if (lockPath != null)
{
    LockData newLockData = new LockData(currentThread, lockPath);
    threadData.put(currentThread, newLockData);
    return true;
}
return false;

If the same thread requests the lock again, the lock count is simply incremented, demonstrating re‑entrancy. Otherwise, LockInternals.attemptLock() is invoked.
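The per‑thread bookkeeping above can be modeled in plain Java. This is a simplified sketch of the re‑entrancy logic only, not Curator's actual classes; the real implementation also creates and deletes a ZNode:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of InterProcessMutex's re-entrancy bookkeeping.
// "Acquiring" here only tracks counts; the real class also creates a ZNode.
class ReentrantModel {
    private final ConcurrentMap<Thread, AtomicInteger> threadData = new ConcurrentHashMap<>();

    // returns the hold count after acquiring
    public int acquire() {
        Thread current = Thread.currentThread();
        AtomicInteger count = threadData.get(current);
        if (count != null) {
            return count.incrementAndGet();          // re-entrant: just bump the count
        }
        threadData.put(current, new AtomicInteger(1)); // first acquisition by this thread
        return 1;
    }

    // returns the remaining hold count after releasing
    public int release() {
        Thread current = Thread.currentThread();
        AtomicInteger count = threadData.get(current);
        if (count == null) {
            throw new IllegalMonitorStateException("not the lock owner");
        }
        int remaining = count.decrementAndGet();
        if (remaining == 0) {
            threadData.remove(current);              // real class deletes the ZNode here
        }
        return remaining;
    }
}
```

Each nested acquire must be matched by a release; only when the count drops back to zero is the lock actually given up.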
LockInternals.attemptLock() proceeds through several steps: (1) it evaluates the timeout, (2) creates an ephemeral sequential ZNode, and (3) enters a waiting loop that tries to acquire the lock. The node creation code is:

ourPath = client.create().creatingParentsIfNeeded().withProtection().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(path);

For example, with a base path /zklock/activityno, Curator creates nodes such as /zklock/activityno/_c_f4a49d75-86f8-40b2-8b9c-d813392aa1db-lock-0000000008. Every requesting thread receives its own sequential node, and ZooKeeper guarantees the ordering of the sequence numbers, enabling a fair lock acquisition strategy.
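The fairness comes from ordering contenders by the numeric suffix that ZooKeeper appends to each node name. The following sketch (not Curator's exact sorting code) shows how such names can be compared; the node names mirror the article's example:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: order sequential lock nodes fairly by the numeric suffix that
// ZooKeeper appends after the "-lock-" marker in each node name.
class SequenceSort {
    static long sequenceNumber(String node) {
        int idx = node.lastIndexOf("-lock-");
        return Long.parseLong(node.substring(idx + "-lock-".length()));
    }

    public static List<String> sortBySequence(List<String> children) {
        return children.stream()
                .sorted(Comparator.comparingLong(SequenceSort::sequenceNumber))
                .collect(Collectors.toList());
    }
}
```

The earliest sequence number is first in line for the lock, regardless of the random protection prefix in each name.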
The core lock decision sorts all child nodes, extracts their sequence numbers, and determines whether the current node falls within the allowed lease count (1 for a mutex). The relevant code, simplified, is:

PredicateResults predicateResults = driver.getsTheLock(client, children, sequenceNodeName, maxLeases);
if (predicateResults.getsTheLock())
{
    haveTheLock = true;
}
else
{
    // watch the predecessor node and block until it changes or is deleted
    String previousSequencePath = basePath + "/" + predicateResults.getPathToWatch();
    synchronized(this)
    {
        client.getData().usingWatcher(watcher).forPath(previousSequencePath);
        wait();
    }
}

If the lock is not obtained, a watcher is set on the predecessor node; when that node changes or is deleted, the waiting thread is woken via notifyFromWatcher().
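The wait/notifyFromWatcher() handshake can be illustrated with plain Java monitors. This is a standalone simulation, not Curator code; the watcher callback is replaced by a method you call by hand:

```java
// Plain-Java sketch of the handshake Curator uses: the waiting thread parks
// on a monitor instead of sleeping, and a (simulated) watcher event on the
// predecessor node wakes it, mirroring notifyFromWatcher().
class WatcherWait {
    private final Object monitor = new Object();
    private boolean predecessorGone = false;

    // called by the thread that failed to get the lock
    public void awaitPredecessor() throws InterruptedException {
        synchronized (monitor) {
            while (!predecessorGone) {
                monitor.wait();       // blocks until notified; no busy sleeping
            }
        }
    }

    // simulates the ZooKeeper watcher firing when the predecessor is deleted
    public void onPredecessorDeleted() {
        synchronized (monitor) {
            predecessorGone = true;
            monitor.notifyAll();      // wake all waiters, as notifyFromWatcher() does
        }
    }
}
```

The key property is that the waiting thread consumes no CPU and wakes immediately when the predecessor disappears, instead of discovering it on the next polling interval.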
The driver's getsTheLock implementation determines the index of the current node in the sorted list of children and checks whether it falls within the allowed lease range:

public PredicateResults getsTheLock(CuratorFramework client, List<String> children, String sequenceNodeName, int maxLeases) throws Exception
{
    int ourIndex = children.indexOf(sequenceNodeName);
    validateOurIndex(sequenceNodeName, ourIndex);   // throws if our node has disappeared
    boolean getsTheLock = ourIndex < maxLeases;
    // if we do not get the lock, watch the node maxLeases places before ours
    String pathToWatch = getsTheLock ? null : children.get(ourIndex - maxLeases);
    return new PredicateResults(pathToWatch, getsTheLock);
}

Releasing the lock involves verifying that the releasing thread owns the lock and then deleting its ephemeral ZNode, which ZooKeeper removes automatically anyway if the client disconnects.
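The predicate can be exercised in isolation. Below is a simplified re‑implementation (not Curator's class) that makes the decision testable without a ZooKeeper ensemble:

```java
import java.util.List;

// Simplified re-implementation of the getsTheLock predicate: given the sorted
// children of the lock path, decide whether our node holds a lease and, if
// not, which predecessor to watch.
class LockPredicate {
    final boolean getsTheLock;
    final String pathToWatch;   // predecessor to watch, or null if we hold the lock

    LockPredicate(List<String> sortedChildren, String ourNode, int maxLeases) {
        int ourIndex = sortedChildren.indexOf(ourNode);
        if (ourIndex < 0) {
            throw new IllegalStateException("our node is missing: " + ourNode);
        }
        this.getsTheLock = ourIndex < maxLeases;
        this.pathToWatch = getsTheLock ? null : sortedChildren.get(ourIndex - maxLeases);
    }
}
```

With maxLeases = 1, only the first node in sequence order holds the lock, and every other contender watches exactly the node immediately ahead of it, so a deletion wakes only the next thread in line.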
In summary, Curator offers multiple lock recipes (re‑entrant mutex, read‑write lock, and others), creates sequential nodes so that every contender eventually gets its turn, uses wait()/notifyAll() driven by ZooKeeper watchers for efficient wake‑ups instead of wasteful Thread.sleep() polling, and leverages ZooKeeper's automatic cleanup of ephemeral nodes when a client disconnects.
Qunar Tech Salon
Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.