
Master Go Concurrency: Mutex, RWMutex, Cond, Atomic, Once & WaitGroup Explained

This article explores Go's built‑in concurrency primitives—including Mutex, RWMutex, Condition variables, atomic operations, sync.Once, and WaitGroup—detailing their purposes, usage patterns, and best‑practice guidelines to write correct and efficient concurrent programs.

Architecture Development Notes

Concurrent programming is essential in modern software development, and Go provides powerful built‑in support for it. But when multiple goroutines share resources, that contention must be managed carefully.

Challenges of Concurrent Programming

Without proper coordination, goroutines can race for the same resources, leading to data inconsistency, crashes, or security issues, much like two cooks reaching for the same pot.
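The problem is easy to reproduce with an unsynchronized counter. In the sketch below, `racyCount` is a hypothetical helper name: many goroutines perform a read‑modify‑write on the same variable, so increments are frequently lost, and `go run -race` flags the access.

```go
package main

import (
	"fmt"
	"sync"
)

// racyCount increments a shared counter from n goroutines without
// any synchronization; concurrent read-modify-write updates can be
// lost, so the final value is frequently below n.
func racyCount(n int) int {
	count := 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			count++ // data race: unsynchronized read-modify-write
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println("final count:", racyCount(1000)) // often less than 1000
}
```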

Mutex: The Guardian of Exclusive Access

A Mutex ensures that only one goroutine can enter a critical section at a time.

<code>package main

import (
    "fmt"
    "sync"
)

type Counter struct {
    mu    sync.Mutex
    value int
}

func (c *Counter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}

func main() {
    counter := &Counter{}
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Increment()
        }()
    }
    wg.Wait()
    fmt.Println("Final count:", counter.Value())
}
</code>

The example creates a counter accessed by many goroutines; the mutex prevents data races but can become a bottleneck if overused.
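When a single mutex does become a bottleneck, one common mitigation is to shard the state so goroutines contend on different locks. A minimal sketch, where the `ShardedCounter` type and its API are illustrative rather than a standard library feature:

```go
package main

import (
	"fmt"
	"sync"
)

// shard pairs a slice of the state with its own lock.
type shard struct {
	mu sync.Mutex
	n  int
}

// ShardedCounter spreads increments across several independently
// locked shards, so concurrent callers rarely hit the same mutex.
type ShardedCounter struct {
	shards []shard
}

func NewShardedCounter(numShards int) *ShardedCounter {
	return &ShardedCounter{shards: make([]shard, numShards)}
}

// Inc picks a shard by key so different keys usually lock different shards.
func (c *ShardedCounter) Inc(key int) {
	s := &c.shards[key%len(c.shards)]
	s.mu.Lock()
	s.n++
	s.mu.Unlock()
}

// Value sums all shards; it locks each shard briefly in turn.
func (c *ShardedCounter) Value() int {
	total := 0
	for i := range c.shards {
		c.shards[i].mu.Lock()
		total += c.shards[i].n
		c.shards[i].mu.Unlock()
	}
	return total
}

func main() {
	c := NewShardedCounter(8)
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.Inc(i)
		}(i)
	}
	wg.Wait()
	fmt.Println("total:", c.Value())
}
```

The trade‑off: reads become slightly more expensive (they touch every shard), which is acceptable when increments dominate.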

RWMutex: Smart Sharing for Reads

RWMutex allows multiple concurrent reads while writes remain exclusive, improving performance when reads dominate.

<code>package main

import (
    "fmt"
    "sync"
    "time"
)

type DataStore struct {
    rwmu sync.RWMutex
    data map[string]string
}

func (ds *DataStore) Read(key string) string {
    ds.rwmu.RLock()
    defer ds.rwmu.RUnlock()
    return ds.data[key]
}

func (ds *DataStore) Write(key, value string) {
    ds.rwmu.Lock()
    defer ds.rwmu.Unlock()
    ds.data[key] = value
}

func main() {
    store := &DataStore{data: make(map[string]string)}
    // Writer
    go func() {
        for i := 0; i < 10; i++ {
            store.Write(fmt.Sprintf("key%d", i), fmt.Sprintf("value%d", i))
            time.Sleep(100 * time.Millisecond)
        }
    }()
    // Readers
    for i := 0; i < 100; i++ {
        go func(n int) {
            for j := 0; j < 10; j++ {
                key := fmt.Sprintf("key%d", n%10)
                value := store.Read(key)
                fmt.Printf("Read %s: %s\n", key, value)
                time.Sleep(10 * time.Millisecond)
            }
        }(i)
    }
    time.Sleep(2 * time.Second)
}
</code>

Condition Variable (Cond): Waiting for Specific States

Cond works with a Mutex to let goroutines wait until a condition is met, offering Wait(), Signal(), and Broadcast() methods.

Wait(): atomically releases the lock and blocks until notified; it reacquires the lock before returning, so it must be called with the lock held and inside a loop that rechecks the condition.

Signal(): wakes one waiting goroutine.

Broadcast(): wakes all waiting goroutines.

Example using a bounded producer‑consumer queue:

<code>package main

import (
    "fmt"
    "sync"
    "time"
)

type Queue struct {
    mu    sync.Mutex
    cond  *sync.Cond
    items []int
    max   int
}

func NewQueue(max int) *Queue {
    q := &Queue{items: make([]int, 0, max), max: max}
    q.cond = sync.NewCond(&q.mu)
    return q
}

func (q *Queue) Produce(item int) {
    q.mu.Lock()
    defer q.mu.Unlock()
    for len(q.items) == q.max {
        q.cond.Wait()
    }
    q.items = append(q.items, item)
    fmt.Printf("Produced: %d\n", item)
    q.cond.Signal()
}

func (q *Queue) Consume() int {
    q.mu.Lock()
    defer q.mu.Unlock()
    for len(q.items) == 0 {
        q.cond.Wait()
    }
    item := q.items[0]
    q.items = q.items[1:]
    fmt.Printf("Consumed: %d\n", item)
    q.cond.Signal()
    return item
}

func main() {
    q := NewQueue(5)
    // Producer
    go func() {
        for i := 0; i < 20; i++ {
            q.Produce(i)
            time.Sleep(100 * time.Millisecond)
        }
    }()
    // Consumer
    go func() {
        for i := 0; i < 20; i++ {
            q.Consume()
            time.Sleep(200 * time.Millisecond)
        }
    }()
    time.Sleep(5 * time.Second)
}
</code>
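The queue uses Signal() because only one waiter can make progress per notification. When every waiter must be released at once, Broadcast() is the tool. A common shape is a start gate; in this sketch the `Gate` type and its method names are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// Gate blocks callers in Wait until Open is called; Open uses
// Broadcast so that every waiting goroutine is released at once.
type Gate struct {
	mu   sync.Mutex
	cond *sync.Cond
	open bool
}

func NewGate() *Gate {
	g := &Gate{}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// Wait blocks until the gate is open, rechecking the condition in a loop.
func (g *Gate) Wait() {
	g.mu.Lock()
	for !g.open {
		g.cond.Wait()
	}
	g.mu.Unlock()
}

// Open flips the condition under the lock, then wakes all waiters.
func (g *Gate) Open() {
	g.mu.Lock()
	g.open = true
	g.mu.Unlock()
	g.cond.Broadcast()
}

func main() {
	g := NewGate()
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			g.Wait() // all five goroutines block here
			fmt.Printf("runner %d released\n", id)
		}(i)
	}
	g.Open() // one Broadcast releases every waiter
	wg.Wait()
}
```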

Atomic Operations: Lock‑Free Concurrency

For simple shared‑variable updates, the sync/atomic package provides lock‑free primitives like AddInt64, LoadInt64, and CompareAndSwap.

<code>package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter int64
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            atomic.AddInt64(&counter, 1)
        }()
    }
    wg.Wait()
    fmt.Println("Final count:", atomic.LoadInt64(&counter))
}
</code>

Once: Guaranteeing Single Execution

sync.Once ensures a function runs only once, useful for lazy initialization or singleton patterns.

<code>package main

import (
    "fmt"
    "sync"
)

type Config struct {}

var (
    config     *Config
    configOnce sync.Once
)

func GetConfig() *Config {
    configOnce.Do(func() {
        fmt.Println("Initializing config...")
        config = &Config{}
    })
    return config
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c := GetConfig()
            fmt.Printf("Config: %p\n", c)
        }()
    }
    wg.Wait()
}
</code>
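If you are on Go 1.21 or newer, sync.OnceValue packages this lazy‑initialization pattern into a single call: it wraps a function so the body runs once and every caller receives the cached result. A brief sketch, with `getAnswer` as an illustrative name:

```go
package main

import (
	"fmt"
	"sync"
)

// getAnswer wraps an expensive initializer with sync.OnceValue
// (Go 1.21+): the function body runs exactly once, and every
// subsequent call returns the cached result.
var getAnswer = sync.OnceValue(func() int {
	fmt.Println("computing once...")
	return 42
})

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(getAnswer()) // same cached value for every caller
		}()
	}
	wg.Wait()
}
```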

WaitGroup: Coordinating Goroutine Completion

WaitGroup lets a program wait for a collection of goroutines to finish.

<code>package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
    fmt.Println("All workers complete")
}
</code>

Summary and Best Practices

Go provides a rich set of synchronization tools, each suited to specific scenarios:

Use Mutex for exclusive access to shared resources.

Prefer RWMutex when reads dominate writes.

Apply Cond for complex wait/notify patterns.

Leverage atomic operations for simple, high‑performance counters.

Use Once for one‑time initialization.

Employ WaitGroup to synchronize the completion of multiple goroutines.

Key recommendations:

Minimize critical sections to reduce lock contention.

Avoid deadlocks by consistent lock ordering.

Prefer channels for communication‑based concurrency when appropriate.

Consider lock‑free data structures in highly concurrent workloads.

Use defer to release locks reliably.

Thoroughly test concurrent code with Go's race detector.
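The channel recommendation can be sketched with a small worker pool that needs no explicit locks: jobs flow in on one channel, results flow out on another, and closing the channels coordinates shutdown. The `sum` function and its job/result shapes are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// sum fans jobs out to a pool of workers over one channel and
// collects squared results on another; ownership transfers through
// the channels, so no mutex is required.
func sum(jobs []int, workers int) int {
	in := make(chan int)
	out := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in {
				out <- n * n // each worker squares its jobs
			}
		}()
	}

	// Feed the jobs, then close the input so workers exit their loops.
	go func() {
		for _, n := range jobs {
			in <- n
		}
		close(in)
	}()

	// Close the output once every worker has finished.
	go func() {
		wg.Wait()
		close(out)
	}()

	total := 0
	for r := range out {
		total += r
	}
	return total
}

func main() {
	fmt.Println(sum([]int{1, 2, 3, 4}, 3)) // 1+4+9+16 = 30
}
```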

Tags: Concurrency · Go · Mutex · goroutine · RWMutex · sync · atomic
Written by

Architecture Development Notes

Focused on architecture design, technology trend analysis, and practical development experience sharing.
