How to Test Concurrent Go Code with the New testing/synctest Package
Go's built-in concurrency primitives make parallel programming easy, but testing concurrent code is notoriously tricky. This article, translated from the official Go blog, introduces the experimental testing/synctest package shipped in Go 1.24, explains why testing concurrency is hard, and shows how synctest makes such tests fast, reliable, and deterministic.
Testing concurrent programs is hard
Consider a simple test that uses context.AfterFunc to run a function after a context is cancelled. The original test checks that the function is not called before cancellation and is called after cancellation, but it relies on a real time.After delay, making it slow and flaky.
func TestAfterFunc(t *testing.T) {
    ctx, cancel := context.WithCancel(context.Background())

    calledCh := make(chan struct{})
    context.AfterFunc(ctx, func() { close(calledCh) })

    // TODO: Assert that the AfterFunc has not been called.

    cancel()

    // TODO: Assert that the AfterFunc has been called.
}

Because the test waits for a real timeout, it becomes slow (10 ms per run) and unstable on busy CI systems.
Introducing testing/synctest
The testing/synctest package provides two functions, Run and Wait. Run starts a function in a new goroutine inside an isolated "bubble". Wait blocks until every other goroutine in the bubble is either durably blocked on a synchronization point or has exited.
Rewriting the previous test with synctest eliminates the real‑time wait:
func TestAfterFunc(t *testing.T) {
    synctest.Run(func() {
        ctx, cancel := context.WithCancel(context.Background())

        funcCalled := false
        context.AfterFunc(ctx, func() { funcCalled = true })

        synctest.Wait()
        if funcCalled {
            t.Fatalf("AfterFunc called before cancel")
        }

        cancel()

        synctest.Wait()
        if !funcCalled {
            t.Fatalf("AfterFunc not called after cancel")
        }
    })
}

The test is now both fast and stable: Wait guarantees the AfterFunc goroutine has finished (or not started) before each assertion, and the bubble's virtual clock advances automatically whenever all goroutines are blocked.
Testing time‑dependent code
When code depends on timers, sleeping in real time makes tests slow. Inside a bubble, the time package operates on a virtual clock that moves forward only when every goroutine in the bubble is blocked. The following example tests context.WithTimeout without any real delay:
func TestWithTimeout(t *testing.T) {
    synctest.Run(func() {
        const timeout = 5 * time.Second
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()

        // Wait until just before the timeout.
        time.Sleep(timeout - time.Nanosecond)
        synctest.Wait()
        if err := ctx.Err(); err != nil {
            t.Fatalf("before timeout, ctx.Err() = %v; want nil", err)
        }

        // Wait the rest of the way.
        time.Sleep(time.Nanosecond)
        synctest.Wait()
        if err := ctx.Err(); err != context.DeadlineExceeded {
            t.Fatalf("after timeout, ctx.Err() = %v; want DeadlineExceeded", err)
        }
    })
}

Blocking and bubbles
A bubble is “durably blocked” when every goroutine inside it is blocked on an operation that cannot be unblocked from outside the bubble. In that state:
If there is a pending Wait call, it returns.
Otherwise time advances to the next point that can unblock a goroutine.
If no such point exists, the bubble panics.
Operations that cause durable blocking include:
Sending or receiving on a nil channel.
Sending or receiving on a channel created inside the bubble.
A select where every case is durably blocked.
time.Sleep.
sync.Cond.Wait.
sync.WaitGroup.Wait.
Mutexes (sync.Mutex) do not block durably: a mutex may be held, and released, by a goroutine outside the bubble, so blocking on one can always be unblocked from outside.
Channels
Channels created inside a bubble only block durably when used inside the same bubble. Accessing a bubble‑internal channel from outside causes a panic, ensuring that communication stays within the bubble.
I/O
External I/O (e.g., network reads) never durably blocks a bubble. To test network code, a fake network such as net.Pipe can be used together with synctest to verify that goroutines are properly synchronized.
func Test(t *testing.T) {
    synctest.Run(func() {
        srvConn, cliConn := net.Pipe()
        defer srvConn.Close()
        defer cliConn.Close()
        tr := &http.Transport{
            DialContext: func(ctx context.Context, network, address string) (net.Conn, error) {
                return cliConn, nil
            },
            // Expect a "100 Continue" response before sending the body.
            ExpectContinueTimeout: 5 * time.Second,
        }

        body := "request body"
        go func() {
            req, _ := http.NewRequest("PUT", "http://test.tld/", strings.NewReader(body))
            req.Header.Set("Expect", "100-continue")
            resp, err := tr.RoundTrip(req)
            if err != nil {
                t.Errorf("RoundTrip: %v", err)
            } else {
                resp.Body.Close()
            }
        }()

        req, err := http.ReadRequest(bufio.NewReader(srvConn))
        if err != nil {
            t.Fatalf("ReadRequest: %v", err)
        }

        var gotBody strings.Builder
        go io.Copy(&gotBody, req.Body)
        synctest.Wait()
        if got := gotBody.String(); got != "" {
            t.Fatalf("before 100 Continue, unexpected body: %q", got)
        }

        srvConn.Write([]byte("HTTP/1.1 100 Continue\r\n\r\n"))
        synctest.Wait()
        if got := gotBody.String(); got != body {
            t.Fatalf("after 100 Continue, body %q, want %q", got, body)
        }

        srvConn.Write([]byte("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"))
    })
}

Bubble lifecycle
Run returns only after every goroutine in the bubble has exited. If the bubble becomes durably blocked with no way to advance time, Run panics. Tests must therefore clean up any background goroutines before the function passed to Run returns.
Experimental status
The testing/synctest package is experimental in Go 1.24 and hidden by default; enable it by setting the environment variable GOEXPERIMENT=synctest when building or testing. Feedback can be submitted at go.dev/issue/67434.