Master Go Concurrency: Goroutines, Scheduler, and Synchronization Techniques
This article explains Go's concurrency model: how goroutines are scheduled on logical processors, how to create and manage them, and how to detect and resolve race conditions with atomic operations, mutexes, and channels, with practical code examples for each concept.
1. Using Goroutines to Run Programs
1. Go Concurrency vs Parallelism
Go's concurrency allows a function to run independently of others. When a goroutine is created, the runtime scheduler assigns it to an available logical processor (P), which is bound to an OS thread (M). The scheduler manages all created goroutines, allocates their execution time, binds OS threads to logical processors, and maintains runqueues.
The OS schedules threads on physical CPUs, while the Go scheduler schedules goroutines on logical processors. The three roles are:
M : OS thread (kernel thread).
P : Logical processor, the execution context for goroutines.
G : Goroutine with its own stack and instruction pointer, scheduled by a P.
Each P maintains its own local runqueue (the scheduler also keeps a shared global runqueue). Ready goroutines are placed in these queues and dispatched at scheduling points.
2. Creating Goroutines
Use the go keyword to launch a goroutine. Example:
<code>//example1.go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

var wg sync.WaitGroup

func main() {
	// Allocate one logical processor for the scheduler
	runtime.GOMAXPROCS(1)
	wg.Add(2)
	fmt.Printf("Begin Goroutines\n")
	go func() {
		defer wg.Done()
		for count := 0; count < 3; count++ {
			for char := 'a'; char < 'a'+26; char++ {
				fmt.Printf("%c ", char)
			}
		}
	}()
	go func() {
		defer wg.Done()
		for count := 0; count < 3; count++ {
			for char := 'A'; char < 'A'+26; char++ {
				fmt.Printf("%c ", char)
			}
		}
	}()
	fmt.Printf("Waiting To Finish\n")
	wg.Wait()
}
</code>The program sets runtime.GOMAXPROCS(1), creates two goroutines that each print the alphabet, and waits for them to finish. With a single logical processor, the output shows the first goroutine completing before the second runs.
To achieve parallel execution, set two logical processors:
<code>runtime.GOMAXPROCS(2)</code>With two processors, the output interleaves the two goroutine outputs.
When only one logical processor is available, goroutines can be forced to yield using runtime.Gosched():
<code>//example2.go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

var wg sync.WaitGroup

func main() {
	runtime.GOMAXPROCS(1)
	wg.Add(2)
	fmt.Printf("Begin Goroutines\n")
	go func() {
		defer wg.Done()
		for count := 0; count < 3; count++ {
			for char := 'a'; char < 'a'+26; char++ {
				if char == 'k' {
					// Yield the processor so the other goroutine can run
					runtime.Gosched()
				}
				fmt.Printf("%c ", char)
			}
		}
	}()
	go func() {
		defer wg.Done()
		for count := 0; count < 3; count++ {
			for char := 'A'; char < 'A'+26; char++ {
				if char == 'K' {
					runtime.Gosched()
				}
				fmt.Printf("%c ", char)
			}
		}
	}()
	fmt.Printf("Waiting To Finish\n")
	wg.Wait()
}
</code>This causes the goroutines to alternate execution.
2. Handling Race Conditions
Concurrent programs may encounter race conditions when multiple goroutines access the same resource without synchronization. Example:
<code>//example3.go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

var (
	counter int64
	wg      sync.WaitGroup
)

func addCount() {
	defer wg.Done()
	for count := 0; count < 2; count++ {
		value := counter  // read the shared counter
		runtime.Gosched() // yield, widening the window for the race
		value++
		counter = value // write back, possibly clobbering another goroutine's update
	}
}

func main() {
	wg.Add(2)
	go addCount()
	go addCount()
	wg.Wait()
	fmt.Printf("counter: %d\n", counter)
}
</code>Running this program may print counter: 4 or counter: 2, due to unsynchronized reads and writes.
Solutions include:
Using atomic functions.
Using a mutex to protect critical sections.
Using channels.
1. Detecting Race Conditions
Go provides a built-in race detector. Build with the -race flag, then run the program:
go build -race example4.go
./example4
The detector reports the lines where the race occurs.
2. Using Atomic Functions
Atomic operations provide safe concurrent access to integers and pointers. Example using atomic.AddInt64:
<code>//example5.go (atomic version)
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

var (
	counter int64
	wg      sync.WaitGroup
)

func addCount() {
	defer wg.Done()
	for count := 0; count < 2; count++ {
		// Atomically increment; safe without a lock
		atomic.AddInt64(&counter, 1)
		runtime.Gosched()
	}
}

func main() {
	wg.Add(2)
	go addCount()
	go addCount()
	wg.Wait()
	fmt.Printf("counter: %d\n", counter)
}
</code>Other useful atomic functions include atomic.StoreInt64 and atomic.LoadInt64.
3. Using Mutex
A mutex can protect a critical section. Example:
<code>//example5.go (mutex version)
package main

import (
	"fmt"
	"runtime"
	"sync"
)

var (
	counter int
	wg      sync.WaitGroup
	mutex   sync.Mutex
)

func addCount() {
	defer wg.Done()
	for count := 0; count < 2; count++ {
		// Only one goroutine at a time may execute the critical section
		mutex.Lock()
		value := counter
		runtime.Gosched()
		value++
		counter = value
		mutex.Unlock()
	}
}

func main() {
	wg.Add(2)
	go addCount()
	go addCount()
	wg.Wait()
	fmt.Printf("counter: %d\n", counter)
}
</code>Only one goroutine can enter the locked section at a time.
In Go, channels are often the preferred way to avoid race conditions.
3. Sharing Data with Channels
Go follows the CSP model, using channels (chan) to pass data between goroutines. Channels are created with make:
<code>unbuffered := make(chan int) // unbuffered channel for int
buffered := make(chan string, 10) // buffered channel for string
buffered <- "hello world"
value := <-buffered
</code>Unbuffered channels synchronize send and receive; buffered channels allow storing multiple values.
1. Unbuffered Channels
Unbuffered channels block the sender until a receiver is ready and vice versa. Example simulating a tennis match:
<code>//example6.go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

var wg sync.WaitGroup

func player(name string, court chan int) {
	defer wg.Done()
	for {
		ball, ok := <-court
		if !ok {
			// Channel closed: the opponent missed, so this player won
			fmt.Printf("Player %s Won\n", name)
			return
		}
		n := rand.Intn(100)
		if n%13 == 0 {
			fmt.Printf("Player %s Missed\n", name)
			close(court)
			return
		}
		fmt.Printf("Player %s Hit %d\n", name, ball)
		ball++
		court <- ball
	}
}

func main() {
	rand.Seed(time.Now().Unix())
	court := make(chan int)
	wg.Add(2)
	go player("candy", court)
	go player("luffic", court)
	court <- 1 // serve the ball
	wg.Wait()
}
</code>
2. Buffered Channels
Buffered channels hold values until they are received. Sending to a closed channel panics, while receiving from a closed channel yields remaining values. The sender should close the channel.
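A minimal sketch of that pattern, assuming an illustrative tasks channel with a buffer of 5 and two receiving workers (the names are made up for this example):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Buffered channel: up to 5 sends succeed without a waiting receiver.
	tasks := make(chan string, 5)

	var wg sync.WaitGroup
	wg.Add(2)
	for id := 1; id <= 2; id++ {
		go func(id int) {
			defer wg.Done()
			// range keeps receiving until the channel is closed and drained.
			for task := range tasks {
				fmt.Printf("worker %d got %s\n", id, task)
			}
		}(id)
	}

	// The sender fills the buffer, then closes the channel;
	// closing is the sender's responsibility.
	for i := 1; i <= 5; i++ {
		tasks <- fmt.Sprintf("task-%d", i)
	}
	close(tasks)

	wg.Wait()
}
```

Because the channel is buffered, all five sends complete even if the workers have not started receiving yet; the workers then drain the remaining values after close.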
Summary
Goroutine execution is managed by logical processors, each bound to an OS thread and holding its own runqueue.
Multiple goroutines can run concurrently on a single logical processor; parallelism requires multiple logical processors running on separate cores.
Goroutines are created with the go keyword.
Race conditions occur when goroutines access shared resources without synchronization.
Mutexes or atomic functions can prevent race conditions.
Channels provide a better solution for safe data sharing.
Unbuffered channels are synchronous; buffered channels are not.
Raymond Ops
Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.