What Powers Go’s High Concurrency? An Overview of the M‑P‑G Model
Go achieves high concurrency by using its own M‑P‑G scheduling model, where lightweight goroutines (G) run on logical processors (P) that are mapped onto kernel threads (M), allowing user‑space scheduling that avoids the overhead of OS thread context switches.
What supports Go’s high concurrency?
Modern operating systems provide multi‑process and multi‑thread capabilities, but both have drawbacks: processes are heavyweight resource units, and thread context switches in the kernel are still expensive.
To address these issues, Go introduces its own scheduling model, commonly written M‑P‑G (also known as GMP).
M (machine) represents a kernel thread managed by the operating system and scheduled onto a CPU core.
P (processor) is a logical processor that provides the execution context for code.
G (goroutine) is a lightweight unit of concurrent execution managed by the Go runtime, starting with only a few kilobytes of stack.
In short, a G runs on a P, and the P runs on an M.
Unlike OS threads that are scheduled in kernel space, goroutine scheduling occurs in user space with Go’s own scheduler, making goroutines far lighter and more efficient than threads, which is the key reason for Go’s high concurrency.
Recommended reading: "Scheduling In Go" series – https://www.ardanlabs.com/blog/2018/08/scheduling-in-go-part1.html (Chinese translation: https://juejin.im/post/5cdeb6cdf265da1bd605727f)
