Mastering Distributed Locks in Go: Principles, Implementations, and Pitfalls

This article explains the fundamentals of distributed locks, compares Redis, etcd, ZooKeeper and database approaches, provides practical Go code examples, highlights common mistakes, and offers optimization tips so developers can confidently apply the right locking strategy in real-world systems.


What Is a Distributed Lock?

In a single‑process program a sync.Mutex provides mutual exclusion. In a distributed system where multiple instances run on different machines and need to coordinate access to a shared resource, a distributed lock is required. A correct distributed lock must guarantee mutual exclusion, automatic expiration (to avoid deadlocks when a client crashes), and fault tolerance against network partitions.

Common Implementation Schemes in Go

1. Redis‑Based Lock

Redis is the most widely used backend. The lock can be acquired atomically with SET key value NX PX ttl (or the older SETNX). A random value (e.g., a UUID) is stored as the lock value to identify the owner. The lock must be released with a Lua script that deletes the key only if the stored value matches, ensuring that a client does not delete another client’s lock.

// Assumes rdb is a *redis.Client from github.com/redis/go-redis/v9
// and ctx is a context.Context.

// Acquire: SET key value NX PX ttl — succeeds only when the key does not exist.
ok, err := rdb.SetNX(ctx, "lock:order:123", "uuid-xyz", 10*time.Second).Result()
if err != nil {
    log.Fatal("redis error:", err)
}
if !ok {
    fmt.Println("lock already held")
    return
}
fmt.Println("lock acquired")
processOrder()

// Release with a Lua script for an atomic check-and-delete: the key is
// deleted only if it still holds our value, so we never remove a lock
// that has expired and been re-acquired by another client.
script := redis.NewScript(`
    if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
    else
        return 0
    end`)
_, err = script.Run(ctx, rdb, []string{"lock:order:123"}, "uuid-xyz").Result()
if err != nil {
    log.Fatal("release lock failed:", err)
}

Typical scenario: order creation, payment callbacks.

Common pitfall: forgetting to set the TTL results in a permanent lock.
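As a concrete example of the pitfall (reusing the rdb and ctx from above): in go-redis a zero expiration means no TTL at all, so a holder that crashes before releasing blocks every other client forever.

// Anti-pattern: an expiration of 0 sets no TTL, so the key lives until it
// is explicitly deleted — a crashed holder leaves the lock stuck.
ok, err := rdb.SetNX(ctx, "lock:order:123", "uuid-xyz", 0).Result()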

2. etcd‑Based Lock

etcd provides strong consistency and a lease mechanism. A client creates a lease with a TTL; keys attached to the lease are removed automatically when the lease expires. The Go client library offers concurrency.NewSession (which maintains a heartbeat) and concurrency.NewMutex to implement a lock.

// Assumes ctx is a context.Context; the client is go.etcd.io/etcd/client/v3.
cli, err := clientv3.New(clientv3.Config{
    Endpoints:   []string{"localhost:2379"},
    DialTimeout: 5 * time.Second,
})
if err != nil {
    log.Fatal(err)
}
defer cli.Close()

// The session creates a 10-second lease and keeps it alive with heartbeats;
// closing the session revokes the lease and frees any keys attached to it.
sess, err := concurrency.NewSession(cli, concurrency.WithTTL(10))
if err != nil {
    log.Fatal(err)
}
defer sess.Close()

mutex := concurrency.NewMutex(sess, "/locks/task-1")

// Lock blocks until the lock is acquired or ctx is cancelled.
if err := mutex.Lock(ctx); err != nil {
    log.Fatal("acquire lock failed:", err)
}
fmt.Println("etcd lock acquired")
processTask()

if err := mutex.Unlock(ctx); err != nil {
    log.Fatal("release lock failed:", err)
}

Typical scenario: distributed scheduled tasks, leader election.

Advantage: the lock is bound to the lease, so a client crash automatically frees the lock.
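A minimal sketch of reacting to lease loss, assuming the sess and processTask from the example above: the session's Done channel is closed when the lease expires or is revoked, letting the holder stop work before a new owner steps in.

// sess.Done() is closed when the session's lease is lost, e.g., after a
// long network partition — at that point another client may already hold
// the lock, so stop mutating the shared resource.
finished := make(chan struct{})
go func() {
    defer close(finished)
    processTask()
}()

select {
case <-finished:
    // Work completed while the lease was still alive.
case <-sess.Done():
    log.Println("etcd session lost; lock may have been taken over, aborting")
}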

3. ZooKeeper‑Based Lock

ZooKeeper implements a fair FIFO lock using ephemeral sequential znodes and watchers. Each client creates a sequential node under a common prefix (e.g., /locks). The client whose node has the smallest sequence number owns the lock; each of the others watches its immediate predecessor and retries when that node disappears.

// Uses github.com/go-zookeeper/zk.
conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
if err != nil {
    log.Fatal(err)
}
defer conn.Close()

// NewLock implements the ephemeral-sequential-node recipe described above.
lock := zk.NewLock(conn, "/locks/task-1", zk.WorldACL(zk.PermAll))

if err := lock.Lock(); err != nil {
    log.Fatal("acquire lock failed:", err)
}
fmt.Println("ZooKeeper lock acquired")
processBusiness()

if err := lock.Unlock(); err != nil {
    log.Fatal("release lock failed:", err)
}

Typical scenario: high‑consistency use cases such as financial transactions or flash‑sale systems.

Drawback: deployment is more complex and latency is higher than Redis or etcd.

4. Database‑Based Lock (MySQL)

A MySQL table with a unique index can act as a lock table. Inserting a row with a unique lock_key succeeds only if the lock is free; an ON DUPLICATE KEY UPDATE clause can take over a row whose expiration has already passed, without disturbing a live lock held by another owner.
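A possible table layout (the schema below is an assumption for illustration; any table with a unique key on lock_key works):

CREATE TABLE distributed_lock (
    lock_key  VARCHAR(128) NOT NULL,
    owner     VARCHAR(64)  NOT NULL,
    expire_at DATETIME     NOT NULL,
    PRIMARY KEY (lock_key)  -- the unique index that enforces mutual exclusion
);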

// Acquire: insert a fresh row, or take over a row whose lock has expired.
// The IF() guards leave a live lock held by another owner untouched.
res, err := db.ExecContext(ctx, `
    INSERT INTO distributed_lock (lock_key, owner, expire_at)
    VALUES (?, ?, DATE_ADD(NOW(), INTERVAL 10 SECOND))
    ON DUPLICATE KEY UPDATE
        owner     = IF(expire_at < NOW(), VALUES(owner), owner),
        expire_at = IF(expire_at < NOW(), VALUES(expire_at), expire_at)`,
    "task:abc", "node-1")
if err != nil {
    fmt.Println("acquire lock failed:", err)
    return
}
// RowsAffected reports 1 for a fresh insert and 2 for a takeover;
// 0 means the lock is still held.
if n, _ := res.RowsAffected(); n == 0 {
    fmt.Println("lock already held")
    return
}
// Release: delete only our own row, never another owner's.
defer db.ExecContext(ctx,
    "DELETE FROM distributed_lock WHERE lock_key=? AND owner=?", "task:abc", "node-1")

processBusiness()

Typical scenario: small‑scale task scheduling.

Limitation: lower throughput and higher risk of deadlocks compared with dedicated coordination services.

Common Mistakes

Missing expiration time – locks never release, causing deadlocks.

Deleting another client’s lock – always verify ownership before calling DEL (or equivalent).

Too coarse lock granularity – a global lock serializes all requests and can overload the system.

Over‑reliance on locks – often a single SQL statement (e.g., row‑level lock) can replace a distributed lock.

-- Example of using a row‑level lock to prevent overselling
UPDATE product SET stock = stock - 1
WHERE id = ? AND stock > 0;
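In Go, the caller learns whether the decrement won by checking the affected row count; db, ctx, and productID below are illustrative assumptions:

// The conditional UPDATE is atomic inside MySQL; stock can never go
// negative, so no distributed lock is required for this path.
res, err := db.ExecContext(ctx,
    "UPDATE product SET stock = stock - 1 WHERE id = ? AND stock > 0", productID)
if err != nil {
    return err
}
if n, _ := res.RowsAffected(); n == 0 {
    return errors.New("out of stock")
}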

Performance Optimizations & Practical Tips

Lock granularity: use fine‑grained keys such as order:user:123 instead of a single global key.

Short lock hold time: keep only the critical section inside the lock; defer non‑essential work.

Exponential back‑off: when retrying lock acquisition, increase the wait interval to avoid thundering‑herd effects (sketched after this list).

Automatic renewal: for long‑running tasks, periodically extend the lease or TTL (also sketched after this list).

Idempotent fallback: design business logic so that it can safely retry or recover if the lock expires (e.g., unique constraints, state machines).
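A minimal sketch combining the back‑off and renewal tips for the Redis lock from earlier. The helper names (acquireWithBackoff, keepAlive), retry limits, and intervals are illustrative assumptions, not a library API; imports assumed are context, time, and github.com/redis/go-redis/v9.

// acquireWithBackoff retries SET NX with exponentially growing waits so
// contending clients do not hammer Redis in lockstep.
func acquireWithBackoff(ctx context.Context, rdb *redis.Client, key, val string, ttl time.Duration) (bool, error) {
    backoff := 50 * time.Millisecond
    for attempt := 0; attempt < 8; attempt++ {
        ok, err := rdb.SetNX(ctx, key, val, ttl).Result()
        if err != nil || ok {
            return ok, err
        }
        select {
        case <-time.After(backoff):
        case <-ctx.Done():
            return false, ctx.Err()
        }
        if backoff < 2*time.Second {
            backoff *= 2 // exponential growth, capped at ~2s
        }
    }
    return false, nil // still contended after all attempts
}

// renewScript extends the TTL only while we still own the key — the same
// ownership check used on release.
var renewScript = redis.NewScript(`
    if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("pexpire", KEYS[1], ARGV[2])
    else
        return 0
    end`)

// keepAlive refreshes the TTL at a third of its length until the context
// is cancelled or ownership is lost.
func keepAlive(ctx context.Context, rdb *redis.Client, key, val string, ttl time.Duration) {
    ticker := time.NewTicker(ttl / 3)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            n, err := renewScript.Run(ctx, rdb, []string{key}, val, ttl.Milliseconds()).Int()
            if err != nil || n == 0 {
                return // Redis unreachable or lock taken over: stop renewing
            }
        case <-ctx.Done():
            return
        }
    }
}

Run keepAlive in a goroutine for the duration of the critical section and cancel its context before releasing the lock, so the renewal loop never resurrects a lock you have already given up.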

Comparison of Redis, etcd, and ZooKeeper Locks

The trade‑offs break down along performance, consistency, fairness, and deployment complexity:

Redis: highest throughput and simplest deployment; replication is asynchronous, so consistency is eventual and fairness is not guaranteed. Suitable for high‑throughput workloads that can tolerate a rare double acquisition (e.g., during a failover).

etcd: strong linearizable consistency and automatic expiration via leases; ideal for task scheduling, leader election, and other scenarios requiring strict consistency.

ZooKeeper: provides FIFO fairness and strong consistency; best for use cases that need strict ordering, though deployment is more involved and latency is higher.

Common Exception Handling

Lock expiration: set an appropriate TTL and implement renewal if the critical section exceeds the initial timeout.

Lock loss during failover: in Redis master‑slave failover use the Redlock algorithm (a sketch follows this list) or switch to a service with built‑in lease semantics (etcd).

Retry storms: combine rate limiting with exponential back‑off to prevent cascading failures.

Business fallback: ensure operations are idempotent so the system remains eventually consistent even when a lock is lost.
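For the failover case, a sketch using the third‑party go-redsync library (import paths assume go-redis v9). Note that Redlock only adds safety when the mutex spans several independent Redis nodes, so the single client below is purely illustrative.

package main

import (
    "log"

    "github.com/go-redsync/redsync/v4"
    "github.com/go-redsync/redsync/v4/redis/goredis/v9"
    goredislib "github.com/redis/go-redis/v9"
)

func main() {
    // In a real Redlock deployment, build one pool per independent Redis
    // node and pass all of them to redsync.New.
    client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
    pool := goredis.NewPool(client)
    rs := redsync.New(pool)

    mutex := rs.NewMutex("lock:order:123")
    if err := mutex.Lock(); err != nil {
        log.Fatal("acquire lock failed:", err)
    }
    defer mutex.Unlock() // returns (bool, error); ignored here for brevity

    // critical section
}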

Final Recommendations

Distributed locks should be introduced only after analyzing the actual contention point. Choose the backend that matches the required consistency, latency, and operational complexity (Redis, etcd, ZooKeeper, or a DB). Always configure a TTL, verify ownership before releasing, and make the protected business logic idempotent.

When a lock is unnecessary, avoid adding one; when a lock is required, implement it correctly and efficiently.