Mastering Goroutine Limits: How to Control Go Concurrency Efficiently

Learn how Go's lightweight goroutines work, why unlimited spawning can cause panics, and practical techniques—using sync.WaitGroup, buffered channels, and worker pools—to limit the number of concurrent goroutines safely and efficiently while preserving program correctness and performance.


Goroutine

A goroutine is a lightweight thread of execution managed by the Go runtime. Concurrency in Go is created with the go keyword, and goroutines typically communicate via channels, which act as pipelines and avoid race conditions that come from accessing shared memory directly.

Creating Goroutines

// ordinary function
go functionName(args)

// anonymous function
go func(args) {
    // function body
}(args)

How many goroutines can be started?

Launching an extremely large number of goroutines, such as math.MaxInt32, that all write to the same file or socket (for example, printing to standard output) results in a panic like “panic: too many concurrent operations on a single file or socket (max 1048575)”. This cap is enforced by Go's runtime on a single file descriptor, not by the operating system; and even without hitting it, unbounded spawning can exhaust memory, since each goroutine starts with its own stack.

Controlling Goroutine Count

Use sync.WaitGroup to start a specified number of goroutines.

func testRoutine() {
    var wg sync.WaitGroup
    taskCount := 5 // desired concurrency
    for i := 0; i < taskCount; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done() // mark this task finished even if it panics
            fmt.Println("go func", i)
        }(i)
    }
    wg.Wait() // block until all taskCount goroutines complete
}

If taskCount is set too high, this approach still launches every goroutine at once and may exceed limits. A pool-like design using a buffered channel caps how many goroutines run at the same time, regardless of how many tasks there are.

func testRoutine() {
    taskChan := make(chan bool, 3) // capacity 3: at most 3 goroutines run at once
    var wg sync.WaitGroup
    defer close(taskChan)
    for i := 0; i < math.MaxInt; i++ {
        wg.Add(1)
        taskChan <- true // acquire a token; blocks while 3 goroutines are running
        go func(i int) {
            defer wg.Done()
            fmt.Println("go func", i)
            <-taskChan // release the token only after the work is done
        }(i)
    }
    wg.Wait()
}

Create a channel with buffer size 3; sending blocks when the buffer is full, limiting concurrency.

Before launching a goroutine, send a token to the channel; if the buffer is full, the launch blocks.

When a goroutine finishes, it releases the token by receiving from the channel.

All goroutines are waited on via sync.WaitGroup. A counting scheme over the channel alone could also track completion, but WaitGroup is the idiomatic choice for knowing when every task has finished.

Reference: boilingfrog.github.io/2021/04/14/控制goroutine的数量/

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: concurrency, Goroutine, sync, channel, Worker Pool
Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
