
Why Java Locks Fail in Distributed Systems and How Redis & Zookeeper Fix Them

Java’s built‑in synchronization mechanisms work only within a single JVM. When an e‑commerce service scales across multiple machines, per‑JVM locks can no longer prevent inventory oversell, so developers turn to distributed lock solutions such as Redis (with RedLock or Redisson) and Zookeeper (using ephemeral sequential nodes) to ensure global mutual exclusion.


Why Use Distributed Locks?

Before discussing the problem, consider an e‑commerce scenario: System A runs on a single machine and provides an order‑creation API. Before placing an order, the system checks inventory stored in Redis and updates it when the order is placed.

When two requests arrive simultaneously and the inventory in Redis is 1, both may read the same value and both decrement it, resulting in two orders for a single item: the classic oversell problem.

Using a local lock (e.g., synchronized or ReentrantLock) can serialize the check‑and‑decrement steps on a single JVM, but once the service is scaled to multiple machines, each JVM has its own lock object, so the oversell problem reappears.
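A minimal single‑JVM sketch (class and method names are illustrative, not from any particular codebase) shows why check‑then‑decrement races, and why synchronized only helps locally:

```java
// Hypothetical single-JVM inventory service. Both `stock` and the
// monitor used by `synchronized` live inside one process, so the lock
// serializes threads of this JVM only, not requests handled by other
// instances of the service.
public class InventoryService {
    private int stock = 1;

    // Unsafe: two threads can both observe stock == 1
    // before either one decrements it.
    public boolean orderUnsafe() {
        if (stock > 0) { stock--; return true; }
        return false;
    }

    // Safe within this JVM only: the monitor is a per-process object.
    public synchronized boolean orderLocked() {
        if (stock > 0) { stock--; return true; }
        return false;
    }
}
```

Run two instances of this service on two machines and each has its own monitor, which is exactly the gap a distributed lock closes.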

Distributed Lock Concept

A distributed lock provides a global, unique lock object that all instances can acquire, ensuring only one instance proceeds at a time.

The lock can be backed by Redis, Zookeeper, or a database.

Redis‑Based Distributed Lock

The simplest approach is to use Redis:

// Acquire lock
// NX succeeds only if the key does not exist; PX sets expiration (ms)
SET anyLock unique_value NX PX 30000

// Release lock via Lua script (atomic)
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end

Key points:

Use SET key value NX PX milliseconds: setting the value and the expiration must be one atomic operation; otherwise a crash after setting the key but before setting the expiration leaves a lock that never expires, i.e., a deadlock.

Value must be unique so that a client only deletes a lock it owns.
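The two rules above can be sketched in plain Java with an in‑memory map (illustrative names, not a Redis client API): putIfAbsent plays the role of SET NX, and the atomic remove(key, value) plays the role of the Lua compare‑and‑delete.

```java
import java.util.concurrent.ConcurrentHashMap;

// In-memory sketch of the lock semantics the Redis commands rely on.
public class LockSemantics {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // Like SET key value NX: succeeds only if no one holds the lock.
    public boolean tryAcquire(String key, String uniqueValue) {
        return store.putIfAbsent(key, uniqueValue) == null;
    }

    // Like the Lua get/del pair: atomic compare-and-delete, so a client
    // can never remove a lock that another client now owns.
    public boolean release(String key, String uniqueValue) {
        return store.remove(key, uniqueValue);
    }
}
```

Without the compare step, a client whose lock expired could delete the key after another client re‑acquired it, silently breaking mutual exclusion.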

Redis can be deployed in three modes: single‑node, master‑slave with Sentinel, or cluster. A single node is a single point of failure; with master‑slave replication, an asynchronously replicated lock can be lost during failover. The RedLock algorithm (proposed by the Redis author) instead tries to acquire the lock on a majority of N independent Redis nodes to mitigate these issues, though its correctness is debated.
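The majority rule at the heart of RedLock is simple arithmetic; a hedged sketch of just that rule (the full algorithm also checks that total acquisition time stays well below the lock TTL):

```java
// Sketch of RedLock's quorum rule, not the complete algorithm.
public class RedLockQuorum {
    // A lock counts as held only when acquired on N/2 + 1 nodes.
    public static int quorum(int nodes) {
        return nodes / 2 + 1;
    }

    public static boolean acquired(int nodes, int successes) {
        return successes >= quorum(nodes);
    }
}
```

With 5 nodes, 3 successful SET NX calls are required; 2 successes mean the client must release what it got and retry.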

Redisson – A Higher‑Level Redis Client

Redisson implements distributed locks on top of Redis and handles many details automatically, such as using Lua scripts for atomicity and a watchdog that, by default, renews the lock’s 30‑second TTL every 10 seconds while the holder is still alive.

Config config = new Config();
config.useClusterServers()
    .addNodeAddress("redis://192.168.31.101:7001")
    .addNodeAddress("redis://192.168.31.101:7002")
    .addNodeAddress("redis://192.168.31.101:7003")
    .addNodeAddress("redis://192.168.31.102:7001")
    .addNodeAddress("redis://192.168.31.102:7002")
    .addNodeAddress("redis://192.168.31.102:7003");
RedissonClient redisson = Redisson.create(config);
RLock lock = redisson.getLock("anyLock");
lock.lock();
try {
    // business logic
} finally {
    lock.unlock();
}

Redisson’s watchdog automatically extends the lock’s expiration, preventing accidental release while the holder is still working.
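The watchdog idea can be illustrated with a self‑contained sketch (not Redisson’s actual internals): a scheduled task pushes the expiry forward every third of the lease for as long as the holder is alive, so a live holder never loses the lock to expiration, while a crashed holder stops renewing and the lock lapses.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative watchdog: renews an in-memory expiry instead of a Redis TTL.
public class WatchdogSketch {
    private final AtomicLong expiresAt = new AtomicLong();
    private final long leaseMillis;
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> task;

    public WatchdogSketch(long leaseMillis) { this.leaseMillis = leaseMillis; }

    public void lock() {
        renew(); // set the initial expiry
        long period = leaseMillis / 3; // e.g. 30s lease -> renew every 10s
        task = scheduler.scheduleAtFixedRate(
            this::renew, period, period, TimeUnit.MILLISECONDS);
    }

    void renew() { expiresAt.set(System.currentTimeMillis() + leaseMillis); }

    public void unlock() {
        if (task != null) task.cancel(false); // stop renewing
        scheduler.shutdown();
    }

    public long expiresAt() { return expiresAt.get(); }
}
```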

Zookeeper‑Based Distributed Lock

Zookeeper provides ordered (sequential) and temporary (ephemeral) nodes. The typical algorithm is:

1. Create an EPHEMERAL‑SEQUENTIAL node under a lock directory (e.g., /lock).
2. List all children of the directory and find the smallest sequence number.
3. If the created node is the smallest, the client holds the lock.
4. Otherwise, set a watch on the predecessor node; when it is deleted, repeat the check.

This approach is robust because Zookeeper is designed for distributed coordination: an ephemeral node is deleted automatically when its client session ends, so a crashed lock holder cannot leave the lock stuck, and Zookeeper’s quorum‑based replication keeps every client’s view of the lock consistent.
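The ordering logic of the steps above can be sketched with an in‑memory stand‑in (illustrative, not the real Zookeeper API): each client receives a monotonically increasing sequence number, the smallest live number owns the lock, and every other client watches its immediate predecessor.

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// In-memory stand-in for /lock's sorted children.
public class ZkLockOrder {
    private final AtomicLong counter = new AtomicLong();
    // sequence number -> client id, kept sorted like the children of /lock
    private final ConcurrentSkipListMap<Long, String> nodes = new ConcurrentSkipListMap<>();

    public long createNode(String clientId) {   // EPHEMERAL-SEQUENTIAL create
        long seq = counter.getAndIncrement();
        nodes.put(seq, clientId);
        return seq;
    }

    public boolean holdsLock(long seq) {        // smallest child owns the lock
        return nodes.firstKey() == seq;
    }

    public Long predecessor(long seq) {         // node to watch, or null if holder
        return nodes.lowerKey(seq);
    }

    public void deleteNode(long seq) {          // release, or session death
        nodes.remove(seq);
    }
}
```

Watching only the predecessor (instead of the whole directory) avoids the herd effect: each deletion wakes exactly one waiting client.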

Curator – A Zookeeper Client Library

Curator simplifies Zookeeper usage and provides an InterProcessMutex implementation:

InterProcessMutex lock = new InterProcessMutex(client, "/anyLock");
lock.acquire();
try {
    // business logic
} finally {
    lock.release();
}

Internally Curator creates the EPHEMERAL‑SEQUENTIAL node, watches the predecessor, and deletes the node on release.

Comparison of Redis vs Zookeeper Locks

Redis:

Very high performance, suitable for high‑throughput lock/unlock cycles.

Lock acquisition with a plain SET NX loop is effectively polling (busy‑waiting), which consumes CPU while waiting.

Replication is asynchronous, so in rare failover edge cases the lock can end up held by two clients at once.

Cluster deployment with RedLock mitigates single‑point failures but does not guarantee 100% correctness.

Zookeeper:

Strong consistency and built‑in coordination primitives make the lock model robust.

Clients wait on a watch instead of polling, reducing load.

High read/write load on the Zookeeper ensemble can become a bottleneck.

Recommendation

If a Zookeeper cluster is available, it is generally the safer choice for distributed locks because of its strong consistency guarantees. When only a Redis cluster exists, or when ultra‑low latency is required, Redis (or Redisson) is a practical alternative, provided you understand its limitations.

Ultimately the decision should be based on the existing infrastructure, performance requirements, and tolerance for potential edge‑case inconsistencies.

Written by Java Backend Technology

Focus on Java-related technologies: SSM, Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading. Occasionally cover DevOps tools like Jenkins, Nexus, Docker, and ELK. Also share technical insights from time to time, committed to Java full-stack development!