
Mastering Go Concurrency: Goroutines, Channels, and Synchronization Explained

This article is a guide to Go's concurrency model: goroutine creation and scheduling; synchronization primitives such as WaitGroup, atomic operations, and mutexes; and unbuffered and buffered channels, with practical code examples and notes on race conditions.

Raymond Ops

Preface

I wrote this article as a set of notes while learning Go.

It is based on the book "Go Language in Practice".

The author's remarks are quoted where used.

If anything here is a misunderstanding, corrections are welcome.

Concurrency

In Go, concurrency is supported directly by the language and runtime. A goroutine is a function that runs independently of the code that started it. The go keyword launches a function as a goroutine. Goroutines are scheduled onto logical processors, each of which is bound to an OS thread and has its own run queue.
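As a minimal sketch: the go keyword starts each function call in its own goroutine, and a WaitGroup keeps main alive until both finish (the greet helper and the names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// greet is the work each goroutine performs.
func greet(name string) string {
	return "hello from " + name
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)

	// The go keyword launches each call as a separate goroutine.
	for _, name := range []string{"A", "B"} {
		go func(n string) {
			defer wg.Done()
			fmt.Println(greet(n))
		}(name)
	}

	wg.Wait() // without this, main could return before the goroutines run
}
```

The two lines may print in either order, since the scheduler decides which goroutine runs first.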

The scheduler decides at any moment which goroutine runs on which logical processor. It can preempt long‑running goroutines to give others a chance.

Go's concurrency model is based on CSP (Communicating Sequential Processes). Communication via channels transfers data between goroutines instead of locking shared memory.

Using channels makes concurrent programs easier to reason about and less error-prone.

Concurrency vs Parallelism

The OS schedules threads on physical CPUs, while the Go runtime schedules goroutines on logical processors. Since Go 1.5, the runtime creates one logical processor per CPU core by default; earlier versions defaulted to a single logical processor.

Concurrency is not the same as parallelism: parallel execution requires multiple logical processors backed by multiple physical CPUs running at the same time.

Race Condition

When two or more goroutines access the same resource without synchronization, a race condition occurs.

Atomic functions and mutexes can prevent races.
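A minimal sketch of such a race, using an unsynchronized counter (the loop count of 1000 is arbitrary):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	counter int // shared, intentionally unsynchronized
	wg      sync.WaitGroup
)

// incCounter performs a read-modify-write on counter with no
// synchronization; interleaved goroutines can overwrite each
// other's updates, so increments get lost.
func incCounter() {
	defer wg.Done()
	for i := 0; i < 1000; i++ {
		value := counter // read
		value++          // modify
		counter = value  // write
	}
}

// run launches two racing goroutines and returns the final count.
func run() int {
	counter = 0
	wg.Add(2)
	go incCounter()
	go incCounter()
	wg.Wait()
	return counter
}

func main() {
	// Frequently prints less than 2000; `go run -race` flags the race.
	fmt.Println("Final Counter:", run())
}
```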

Atomic Functions

<code>package main

import (
    "fmt"
    "runtime"
    "sync"
    "sync/atomic"
)

var (
    counter int64          // shared counter, updated atomically
    wg      sync.WaitGroup // waits for both goroutines to finish
)

func main() {
    wg.Add(2)
    go incCounter(1)
    go incCounter(2)
    wg.Wait()
    fmt.Println("Final Counter:", counter)
}

func incCounter(id int) {
    defer wg.Done()
    for i := 0; i < 2; i++ {
        // AddInt64 performs the increment as a single atomic step.
        atomic.AddInt64(&counter, 1)
        // Yield the logical processor to give the other goroutine a turn.
        runtime.Gosched()
    }
}
</code>

Atomic functions guarantee that the underlying read-modify-write completes as a single, indivisible operation, so concurrent goroutines cannot interleave with it.

Mutex

<code>package main

import (
    "fmt"
    "runtime"
    "sync"
)

var (
    counter int            // shared counter, guarded by mutex
    wg      sync.WaitGroup
    mutex   sync.Mutex     // serializes access to counter
)

func main() {
    wg.Add(2)
    go incCounter(1)
    go incCounter(2)
    wg.Wait()
    fmt.Printf("Final Counter: %d\n", counter)
}

func incCounter(id int) {
    defer wg.Done()
    for i := 0; i < 2; i++ {
        mutex.Lock()
        // Critical section: only one goroutine at a time runs this block.
        value := counter
        // Yielding here does not release the lock, so the other
        // goroutine still cannot enter the critical section.
        runtime.Gosched()
        value++
        counter = value
        mutex.Unlock()
    }
}
</code>

Channels

Channels provide a safe way to share data between goroutines. An unbuffered channel is created with make(chan Type), a buffered one with make(chan Type, capacity).

Sending uses ch <- value; receiving uses value := <-ch. Unbuffered channels synchronize sender and receiver: both must be ready at the same moment. Buffered channels allow asynchronous communication up to the buffer size.
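A minimal sketch of the difference in blocking behavior (the values 42 and 7 are arbitrary):

```go
package main

import "fmt"

func main() {
	// Buffered: the send succeeds immediately because the buffer
	// has room, even though no receiver is ready yet.
	buffered := make(chan int, 1)
	buffered <- 42
	fmt.Println(<-buffered)

	// Unbuffered: a send blocks until a receiver is ready, so the
	// receiver must already be running in another goroutine.
	unbuffered := make(chan int)
	done := make(chan bool)
	go func() {
		fmt.Println(<-unbuffered)
		done <- true
	}()
	unbuffered <- 7 // completes only when the goroutine receives
	<-done
}
```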

Unbuffered Channels

Both sender and receiver must be ready at the same time; until then, whichever side arrives first blocks.

Scheduler diagram

Example: a tennis match simulated with two goroutines exchanging a ball via an unbuffered channel.

<code>package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

var wg sync.WaitGroup

func init() {
    rand.Seed(time.Now().UnixNano())
}

func main() {
    court := make(chan int)
    wg.Add(2)
    go player("Nadal", court)
    go player("Djokovic", court)
    court <- 1 // serve the ball to start the rally
    wg.Wait()
}

func player(name string, court chan int) {
    defer wg.Done()
    for {
        // Wait for the ball; ok is false once the channel is closed.
        ball, ok := <-court
        if !ok {
            // The opponent closed the channel after missing, so we won.
            fmt.Printf("Player %s Won\n", name)
            return
        }
        // Miss about 8 times in 100 (values in [0,100) divisible by 13).
        if rand.Intn(100)%13 == 0 {
            fmt.Printf("Player %s Missed\n", name)
            // Closing the channel tells the opponent the game is over.
            close(court)
            return
        }
        fmt.Printf("Player %s Hit %d\n", name, ball)
        ball++
        // Hit the ball back; blocks until the opponent is ready to receive.
        court <- ball
    }
}
</code>

Buffered Channels

Buffered channels can store values before they are received. Sending blocks only when the buffer is full; receiving blocks only when the buffer is empty.

<code>package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

const (
    numberGoroutines = 4  // workers pulling from the channel
    taskLoad         = 10 // tasks to process
)

var wg sync.WaitGroup

func init() {
    rand.Seed(time.Now().Unix())
}

func main() {
    // Buffered channel large enough to hold every task up front.
    tasks := make(chan string, taskLoad)
    wg.Add(numberGoroutines)
    for i := 1; i <= numberGoroutines; i++ {
        go worker(tasks, i)
    }
    for i := 1; i <= taskLoad; i++ {
        tasks <- fmt.Sprintf("Task : %d", i)
    }
    // Closing lets the workers drain the remaining tasks, then exit.
    close(tasks)
    wg.Wait()
}

func worker(tasks chan string, worker int) {
    defer wg.Done()
    for {
        // ok is false once tasks is closed and its buffer is drained.
        task, ok := <-tasks
        if !ok {
            fmt.Printf("Worker %d: finished\n", worker)
            return
        }
        fmt.Printf("Worker %d: start %s\n", worker, task)
        // Simulate a variable amount of work.
        time.Sleep(time.Duration(rand.Int63n(100)) * time.Millisecond)
        fmt.Printf("Worker %d: done %s\n", worker, task)
    }
}
</code>
Unbuffered vs Buffered channel behavior

These examples show how Go's concurrency primitives (goroutines, WaitGroup, atomic operations, mutexes, and channels) combine to build efficient, safe concurrent programs.

Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
