Scaling a Go Backend: From Simple Goroutines to Job/Worker Pools

This article walks through three Go server‑side scaling techniques—starting a goroutine per request, using a buffered channel queue, and implementing a full job/worker pool with separate task and worker channels—complete with code examples and practical considerations for high‑traffic applications.

Go Development Architecture Practice
Per‑request goroutine

The simplest way to handle an HTTP request in Go is to start a new goroutine for each incoming request. The spawned goroutine runs concurrently with the handler, so the response can be written immediately while processing continues in the background.

package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    router := gin.Default()
    router.Handle("POST", "/submit", submit)
    router.Run(":8080")
}

func submit(ctx *gin.Context) {
    if err := ctx.Request.ParseForm(); err != nil {
        ctx.String(http.StatusBadRequest, "%s", "failure")
        return
    }
    message := ctx.PostForm("message")
    // Spawn one goroutine per request; it outlives the handler.
    go func(msg string) {
        fmt.Println("processing uploaded message:", msg)
    }(message)
    ctx.String(http.StatusOK, "%s", "success")
}

This pattern works for low to moderate traffic. When request handling becomes heavy or traffic spikes, the number of in-flight goroutines grows without bound and can exhaust memory and scheduler resources.

Buffered channel queue

To decouple request reception from processing, a buffered channel can act as a queue. The HTTP handler pushes work into the channel, while a separate processor goroutine pulls tasks from the channel and handles them. The buffer size determines how many pending tasks can be stored before producers block.

package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
)

const MAX_QUEUE = 256

var channel chan string

func init() {
    channel = make(chan string, MAX_QUEUE)
}

func main() {
    go startProcessor()
    router := gin.Default()
    router.Handle("POST", "/submit", submit)
    router.Run(":8080")
}

func submit(ctx *gin.Context) {
    if err := ctx.Request.ParseForm(); err != nil {
        ctx.String(http.StatusBadRequest, "%s", "failure")
        return
    }
    message := ctx.PostForm("message")
    channel <- message // enqueue; blocks when the buffer is full
    ctx.String(http.StatusOK, "%s", "success")
}

func startProcessor() {
    // Ranging over the channel is simpler than a single-case select
    // and exits cleanly if the channel is ever closed.
    for msg := range channel {
        fmt.Println("processing uploaded message:", msg)
    }
}

This approach is suitable when the average arrival rate of requests does not exceed the processing rate. If the buffer fills up, the handler's send blocks, providing natural back-pressure at the cost of stalled requests.
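When stalling the handler is not acceptable, a non-blocking send via `select`/`default` lets the server shed load (for example, replying 503) instead of blocking. This variant is not in the original article; `tryEnqueue` and `maxQueue` are hypothetical names for illustration:

```go
package main

import "fmt"

const maxQueue = 2 // tiny buffer so the full case is easy to show

var queue = make(chan string, maxQueue)

// tryEnqueue attempts a non-blocking send; it returns false when the
// queue is full so the caller can fail fast instead of stalling.
func tryEnqueue(msg string) bool {
	select {
	case queue <- msg:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(tryEnqueue("a")) // true
	fmt.Println(tryEnqueue("b")) // true
	fmt.Println(tryEnqueue("c")) // false: buffer of 2 is full
}
```

In the handler, a `false` result would map to `ctx.String(http.StatusServiceUnavailable, ...)` rather than a blocked goroutine.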

Job/Worker pattern

A more robust solution uses a two‑level channel system: a global job queue and a pool of workers. Each worker has its own job channel and registers itself in a worker‑pool channel. The dispatcher reads jobs from the global queue and assigns them to idle workers, limiting concurrency and improving resilience under high load.

package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
)

const (
    MAX_QUEUE  = 256
    MAX_WORKER = 32
)

var JobQueue chan string

type Worker struct {
    WorkerPool chan chan string // shared pool of worker job channels
    JobChannel chan string      // channel carrying this worker's jobs
    quit       chan bool
}

func NewWorker(pool chan chan string) *Worker {
    return &Worker{
        WorkerPool: pool,
        JobChannel: make(chan string),
        quit:       make(chan bool),
    }
}

func (w *Worker) Start() {
    go func() {
        for {
            // Re-register this worker's job channel in the pool,
            // signalling that the worker is idle again.
            w.WorkerPool <- w.JobChannel
            select {
            case job := <-w.JobChannel:
                fmt.Println("processing job:", job) // actual job processing
            case <-w.quit:
                return
            }
        }
    }()
}

func (w *Worker) Stop() {
    go func() { w.quit <- true }()
}

type Dispatcher struct {
    WorkerPool chan chan string
    maxWorkers int
    quit       chan bool
}

func NewDispatcher(maxWorkers int) *Dispatcher {
    // Pool capacity matches the worker count so every idle worker
    // can register without blocking.
    pool := make(chan chan string, maxWorkers)
    return &Dispatcher{WorkerPool: pool, maxWorkers: maxWorkers, quit: make(chan bool)}
}

func (d *Dispatcher) dispatch() {
    for {
        select {
        case job := <-JobQueue:
            go func(job string) {
                jobChannel := <-d.WorkerPool // wait for an idle worker's channel
                jobChannel <- job            // hand the job to that worker
            }(job)
        case <-d.quit:
            return
        }
    }
}

func (d *Dispatcher) Run() {
    for i := 0; i < d.maxWorkers; i++ {
        worker := NewWorker(d.WorkerPool)
        worker.Start()
    }
    go d.dispatch()
}

func main() {
    JobQueue = make(chan string, MAX_QUEUE)
    dispatcher := NewDispatcher(MAX_WORKER)
    dispatcher.Run()

    router := gin.Default()
    router.Handle("POST", "/submit", submit)
    router.Run(":8080")
}

func submit(ctx *gin.Context) {
    if err := ctx.Request.ParseForm(); err != nil {
        ctx.String(http.StatusBadRequest, "%s", "failure")
        return
    }
    message := ctx.PostForm("message")
    JobQueue <- message // enqueue job
    ctx.String(http.StatusOK, "%s", "success")
}

The dispatcher continuously pulls jobs from JobQueue and hands them to idle workers via the worker‑pool channel. Workers process jobs concurrently up to the configured pool size, providing controlled concurrency and better resilience under high load.
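For comparison, the same bounded concurrency can be achieved more compactly by letting N workers range over a single shared job channel, so the runtime's channel scheduling replaces the explicit idle-worker registration. This is an alternative sketch, not code from the original article; `runPool` is an illustrative name:

```go
package main

import (
	"fmt"
	"sync"
)

// runPool starts `workers` goroutines that all receive from one shared
// job channel; each receive claims exactly one job, so concurrency is
// naturally capped at the worker count.
func runPool(workers int, jobs []string) []string {
	jobCh := make(chan string)
	results := make(chan string, len(jobs))
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for job := range jobCh {
				results <- "done:" + job // stand-in for real processing
			}
		}()
	}

	for _, j := range jobs {
		jobCh <- j
	}
	close(jobCh) // ends each worker's range loop
	wg.Wait()
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	out := runPool(4, []string{"a", "b", "c"})
	fmt.Println(len(out)) // prints 3
}
```

The two-level channel design in the article buys per-worker addressing (useful if workers hold distinct state); when workers are interchangeable, the shared-channel form above is usually sufficient.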

Source: github.com/guishenbumie/MyBlog/wiki


Tags: Backend, concurrency, Goroutine, channel, Worker Pool
Written by

Go Development Architecture Practice

Daily sharing of Golang-related technical articles, practical resources, language news, tutorials, real-world projects, and more. Looking forward to growing together. Let's go!
