
Mastering Zookeeper Distributed Locks: From Seckill to Read‑Write Locks

This article explains how Zookeeper’s distributed lock mechanisms—including non‑fair, fair, and read‑write locks—can prevent overselling in high‑traffic seckill scenarios, details their advantages and drawbacks, and provides practical Curator‑based Java implementations with code examples.


Distributed Lock Scenarios

Seckill Scenario Example

In a flash‑sale (seckill) scenario we must prevent inventory overselling or duplicate charging, so a distributed lock is typically used to avoid data inconsistency caused by concurrent access to shared resources.

Taking a mobile‑phone seckill as an example, the purchase process usually includes three steps:

Deduct the product inventory.

Create the order.

User payment.

We can lock the inventory when a user accesses the "place order" link, perform the deduction and related operations, then release the lock so the next user can proceed, reducing DB rollbacks. The workflow is illustrated below:

Zookeeper distributed lock seckill scenario

Note: choose the lock granularity to balance correctness against throughput for your specific requirements.

Three Types of Distributed Locks

Zookeeper implements distributed locks by leveraging two features:

The same node path cannot be created twice (creation is exclusive).

The Watcher notification mechanism.
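These two primitives are enough to build a lock. Below is a rough in‑process sketch, a simulation only and not the ZooKeeper API: a map's `putIfAbsent` stands in for exclusive znode creation, and a stored callback stands in for a deletion watcher.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// In-process simulation of the two ZooKeeper primitives a lock relies on:
// exclusive node creation and delete notifications (watchers).
public class ZkPrimitivesSketch {
    private final Map<String, byte[]> nodes = new ConcurrentHashMap<>();
    private final Map<String, List<Runnable>> watchers = new ConcurrentHashMap<>();

    // Like create(): succeeds only if the path does not already exist.
    public boolean create(String path) {
        return nodes.putIfAbsent(path, new byte[0]) == null;
    }

    // Like exists(path, watcher): register a one-shot deletion callback.
    public void watchDelete(String path, Runnable callback) {
        watchers.computeIfAbsent(path, p -> new CopyOnWriteArrayList<>()).add(callback);
    }

    // Like delete(): removes the node and fires any pending watchers.
    public void delete(String path) {
        nodes.remove(path);
        List<Runnable> callbacks = watchers.remove(path);
        if (callbacks != null) {
            callbacks.forEach(Runnable::run);
        }
    }

    public static void main(String[] args) {
        ZkPrimitivesSketch zk = new ZkPrimitivesSketch();
        System.out.println(zk.create("/lock"));   // true: first creator wins the lock
        System.out.println(zk.create("/lock"));   // false: node already exists
        zk.watchDelete("/lock", () -> System.out.println("lock released"));
        zk.delete("/lock");                       // prints "lock released"
    }
}
```

A real implementation would use ephemeral znodes so that a crashed client's lock is released automatically when its session expires.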

Non‑Fair Lock

The acquisition process for a non‑fair lock is shown below.

Zookeeper non‑fair lock

Pros and Cons

Pros: Simple implementation; the watcher notification mechanism gives fast response, similar to ReentrantLock; if a client fails to delete its node, the session timeout releases it.

Cons: Heavyweight and can cause a "thundering herd" problem when many watchers are triggered on node deletion.

The "thundering herd" occurs when deleting a single node triggers a large number of watcher callbacks at once, putting a sudden spike of notification load on the Zookeeper cluster.

Mitigating the thundering herd:

Abstract the lock as a directory; multiple threads create temporary sequential nodes under it.

Create a sequential node, then obtain the smallest node in the directory. If it is the current node, the lock is acquired; otherwise, acquisition fails.

If acquisition fails, watch the predecessor node; when that node is deleted, the current node is notified.

When unlocking, delete the node to notify the next waiting node.
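The decision step in the scheme above can be sketched as a small helper (hypothetical code, not Curator's API): sort the children of the lock directory, take the lock if your node is the smallest, otherwise return the immediate predecessor to watch. ZooKeeper's sequential suffixes are zero‑padded ten‑digit numbers, so a plain lexicographic sort orders them correctly.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of the fair-lock decision step: smallest sequential node holds the
// lock; every other node watches only its immediate predecessor, so a delete
// wakes exactly one waiter instead of the whole herd.
public class FairLockStep {
    /** Returns null if myNode holds the lock, otherwise the node to watch. */
    public static String nodeToWatch(List<String> children, String myNode) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);  // zero-padded suffixes sort lexicographically
        int idx = sorted.indexOf(myNode);
        if (idx < 0) {
            throw new IllegalArgumentException("myNode not among children");
        }
        return idx == 0 ? null : sorted.get(idx - 1);
    }

    public static void main(String[] args) {
        List<String> children =
                Arrays.asList("lock-0000000003", "lock-0000000001", "lock-0000000002");
        System.out.println(nodeToWatch(children, "lock-0000000001")); // null: holds the lock
        System.out.println(nodeToWatch(children, "lock-0000000003")); // lock-0000000002
    }
}
```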

Fair Lock

To address the drawbacks of the non‑fair lock, the fair lock uses sequential nodes to ensure orderly acquisition, reducing server pressure.

Zookeeper fair lock

Pros and Cons

Pros: Using temporary sequential nodes avoids concurrent lock contention and eases server load.

Cons: It treats every request as exclusive, so in read‑heavy scenarios reads queue behind one another and performance degrades; a read‑write lock (analogous to Java's ReadWriteLock) is needed.

Read‑Write Lock Implementation

A read‑write lock allows concurrent reads while writes obtain exclusive access. When acquiring a write lock, it must wait for the last read lock to finish.

Read request: if only read locks precede it, proceed without waiting; if a write lock precedes it, wait for the last preceding write lock to be released.

Write request: use the same Watcher mechanism as a mutex, listening on the immediately preceding node.

Zookeeper read‑write lock
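The admission rules above can be sketched as follows (illustrative code; the READ-/WRIT- node naming is an assumption for this example, though Curator embeds similar __READ__/__WRIT__ markers in its lock node names): a reader proceeds unless some write node precedes it, while a writer proceeds only when it is the smallest node.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of read-write lock admission over sequential nodes whose names are
// "READ-<seq>" or "WRIT-<seq>" (illustrative naming).
public class ReadWriteAdmission {
    /** A reader proceeds unless a write node precedes it. */
    public static boolean readMayProceed(List<String> children, String myNode) {
        List<String> sorted = sorted(children);
        int idx = sorted.indexOf(myNode);
        for (int i = 0; i < idx; i++) {
            if (sorted.get(i).contains("WRIT")) {
                return false; // must wait for the last preceding writer
            }
        }
        return true;
    }

    /** A writer proceeds only as the smallest node (exclusive access). */
    public static boolean writeMayProceed(List<String> children, String myNode) {
        return sorted(children).indexOf(myNode) == 0;
    }

    private static List<String> sorted(List<String> children) {
        // Order by the trailing zero-padded sequence number.
        List<String> s = new ArrayList<>(children);
        s.sort(Comparator.comparing(n -> n.substring(n.lastIndexOf('-') + 1)));
        return s;
    }

    public static void main(String[] args) {
        List<String> children =
                Arrays.asList("READ-0000000001", "WRIT-0000000002", "READ-0000000003");
        System.out.println(readMayProceed(children, "READ-0000000001"));  // true
        System.out.println(readMayProceed(children, "READ-0000000003"));  // false: writer ahead
        System.out.println(writeMayProceed(children, "WRIT-0000000002")); // false: reader ahead
    }
}
```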

Distributed Lock Practical Use

Environment for the source code: JDK 1.8, Zookeeper 3.6.x.

Curator Component Implementation

POM Dependencies

<code><dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>2.13.0</version>
</dependency>
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>2.13.0</version>
</dependency></code>

Mutex Usage

Because the non‑fair lock suffers from the thundering herd effect, it is not the best choice on Zookeeper; Curator's InterProcessMutex follows the fair approach, using temporary sequential nodes. Below is a simulated seckill using it as the distributed lock.

<code>import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MutexTest {
    static ExecutorService executor = Executors.newFixedThreadPool(8);
    static AtomicInteger stock = new AtomicInteger(3);

    public static void main(String[] args) throws InterruptedException {
        CuratorFramework client = getZkClient();
        String key = "/lock/lockId_111/111";
        final InterProcessMutex mutex = new InterProcessMutex(client, key);
        for (int i = 0; i < 99; i++) {
            executor.submit(() -> {
                // Fast path: skip lock acquisition once stock is exhausted.
                if (stock.get() <= 0) {
                    System.err.println("Out of stock, returning directly");
                    return;
                }
                try {
                    boolean acquire = mutex.acquire(200, TimeUnit.MILLISECONDS);
                    if (acquire) {
                        int s = stock.decrementAndGet();
                        if (s < 0) {
                            System.err.println("Entered seckill, out of stock");
                        } else {
                            System.out.println("Purchase succeeded, remaining stock: " + s);
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    // Release only if this process actually holds the lock.
                    try {
                        if (mutex.isAcquiredInThisProcess()) {
                            mutex.release();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        // Stop accepting new tasks, then wait for the submitted ones to finish.
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Seckill finished, remaining stock: " + stock.get());
    }

    private static CuratorFramework getZkClient() {
        String zkServerAddress = "127.0.0.1:2181";
        // Retry policy: base sleep 1s, up to 3 retries, capped at 5s per sleep.
        ExponentialBackoffRetry retryPolicy = new ExponentialBackoffRetry(1000, 3, 5000);
        CuratorFramework zkClient = CuratorFrameworkFactory.builder()
                .connectString(zkServerAddress)
                .sessionTimeoutMs(5000)
                .connectionTimeoutMs(5000)
                .retryPolicy(retryPolicy)
                .build();
        zkClient.start();
        return zkClient;
    }
}</code>

Read‑Write Lock Usage

Read‑write locks fit cache double‑write consistency scenarios: read locks are shared, so concurrent reads proceed without blocking one another, while a write lock excludes all readers and writers.

<code>import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.framework.recipes.locks.InterProcessReadWriteLock;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReadWriteLockTest {
    static ExecutorService executor = Executors.newFixedThreadPool(8);
    static AtomicInteger stock = new AtomicInteger(3);
    static InterProcessMutex readLock;
    static InterProcessMutex writeLock;

    public static void main(String[] args) throws InterruptedException {
        CuratorFramework client = getZkClient();
        String key = "/lock/lockId_111/1111";
        InterProcessReadWriteLock readWriteLock = new InterProcessReadWriteLock(client, key);
        readLock = readWriteLock.readLock();
        writeLock = readWriteLock.writeLock();
        for (int i = 0; i < 16; i++) {
            executor.submit(() -> {
                // Read phase: shared lock, concurrent readers do not block each other.
                try {
                    boolean read = readLock.acquire(2000, TimeUnit.MILLISECONDS);
                    if (read) {
                        int num = stock.get();
                        System.out.println("Read stock, current stock: " + num);
                        if (num <= 0) {
                            System.err.println("Out of stock, returning directly");
                            return;
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (readLock.isAcquiredInThisProcess()) {
                        try { readLock.release(); } catch (Exception e) { e.printStackTrace(); }
                    }
                }
                // Write phase: exclusive lock for the actual deduction.
                try {
                    boolean acquire = writeLock.acquire(2000, TimeUnit.MILLISECONDS);
                    if (acquire) {
                        int s = stock.get();
                        if (s <= 0) {
                            System.err.println("Entered seckill, out of stock");
                        } else {
                            s = stock.decrementAndGet();
                            System.out.println("Purchase succeeded, remaining stock: " + s);
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try { if (writeLock.isAcquiredInThisProcess()) writeLock.release(); } catch (Exception e) { e.printStackTrace(); }
                }
            });
        }
        // Stop accepting new tasks, then wait for the submitted ones to finish.
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Seckill finished, remaining stock: " + stock.get());
    }

    private static CuratorFramework getZkClient() {
        String zkServerAddress = "127.0.0.1:2181";
        ExponentialBackoffRetry retryPolicy = new ExponentialBackoffRetry(1000, 3, 5000);
        CuratorFramework zkClient = CuratorFrameworkFactory.builder()
                .connectString(zkServerAddress)
                .sessionTimeoutMs(5000)
                .connectionTimeoutMs(5000)
                .retryPolicy(retryPolicy)
                .build();
        zkClient.start();
        return zkClient;
    }
}</code>

The output begins with a batch of lines reading the current stock (3), then prints successful purchases followed by out‑of‑stock messages as the stock is depleted.

Choosing a Distributed Lock

Both Redis and Zookeeper provide distributed locks. Redis can handle tens of thousands of TPS and is recommended for large‑scale high‑concurrency scenarios, while Zookeeper is suitable for less demanding concurrency requirements.

References

https://www.cnblogs.com/leeego-123/p/12162220.html

http://curator.apache.org/

https://blog.csdn.net/hosaos/article/details/89521537

Tags: Java, Concurrency, Zookeeper, Distributed Lock, Curator
Written by Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
