Speed Up Go Cache Expiration Tests with testing/synctest

This article explains how Go's testing/synctest experiment speeds up cache expiration tests by using a virtual clock and bubble isolation, providing code examples that reduce a five‑second wait to milliseconds while ensuring reliable concurrent test execution.

Radish, Keep Going!

Testing concurrent code in Go, especially time‑dependent cases, can be slow and nondeterministic when relying on the real system clock.

go‑cache is a popular Go cache library that supports TTL and periodic cleanup of expired entries.

The traditional test TestGoCacheEntryExpires creates a cache with a 5-second TTL, sets a value, sleeps for five seconds, and then verifies that the entry has expired — so the test itself takes over five seconds to run.

package cache_test

import (
    "testing"
    "time"

    "github.com/patrickmn/go-cache"
    "github.com/stretchr/testify/assert"
)

func TestGoCacheEntryExpires(t *testing.T) {
    // 5s default TTL, cleanup every 10s.
    c := cache.New(5*time.Second, 10*time.Second)
    c.Set("foo", "bar", cache.DefaultExpiration)
    v, found := c.Get("foo")
    assert.True(t, found)
    assert.Equal(t, "bar", v)
    time.Sleep(5 * time.Second) // real wall-clock wait
    v, found = c.Get("foo")
    assert.False(t, found)
    assert.Nil(t, v)
}

Using the testing/synctest experiment introduced in Go 1.24 (enabled with GOEXPERIMENT=synctest), the same scenario can be tested without any real waiting.

package cache_test

import (
    "testing"
    "testing/synctest"
    "time"

    "github.com/patrickmn/go-cache"
)

func TestGoCacheEntryExpiresWithSynctest(t *testing.T) {
    // 2s default TTL, cleanup every 5s. Set and Get run inside the
    // bubble, so expirations are computed against the virtual clock.
    c := cache.New(2*time.Second, 5*time.Second)
    synctest.Run(func() {
        c.Set("foo", "bar", cache.DefaultExpiration)
        if got, exist := c.Get("foo"); !exist || got != "bar" {
            t.Errorf("c.Get(k) = %v, want %v", got, "bar")
        }
        time.Sleep(1 * time.Second) // virtual: still within the 2s TTL
        if got, exist := c.Get("foo"); !exist || got != "bar" {
            t.Errorf("c.Get(k) = %v, want %v", got, "bar")
        }
        time.Sleep(3 * time.Second) // virtual: now past the 2s TTL
        if got, exist := c.Get("foo"); exist {
            t.Errorf("c.Get(k) = %v, want %v", got, nil)
        }
    })
}

The test finishes in 0.009 seconds because the virtual clock jumps forward instantly whenever every goroutine in the bubble is idle, instead of waiting in real time.

Unveiling testing/synctest

testing/synctest simplifies testing of concurrent code by using a virtual clock and “bubbles” (isolated goroutine groups). It provides two functions:

func Run(f func())
func Wait()
Run executes the function in a new goroutine inside a new bubble, so all goroutines started within it are driven by the virtual clock. Wait blocks the calling goroutine until every other goroutine in the bubble is “durably blocked” or has exited.

A goroutine inside a bubble counts as “durably blocked” when, for example, it is:

- sending on or receiving from a channel created inside the bubble;
- blocked in a select statement in which every case involves a channel created inside the bubble;
- sleeping in time.Sleep.

Blocking system calls and external events do not put a goroutine into the “durably blocked” state, so synctest.Wait does not wait for them.

Each bubble starts its virtual clock at 2000-01-01 00:00:00 UTC. The clock advances only when every goroutine in the bubble is durably blocked, and then it jumps forward instantly, which is why the time.Sleep calls in the test above return immediately.

Example TestSynctest shows that time does not progress while a heavy computation runs in another goroutine.

package synctest_test

import (
    "fmt"
    "testing"
    "testing/synctest"
    "time"
)

func TestSynctest(t *testing.T) {
    synctest.Run(func() {
        before := time.Now()
        fmt.Println("before", before)
        f1 := func() {
            count := 0
            for i := 0; i < 1e10; i++ {
                count++ // CPU-bound busy work; takes a few seconds of real time
            }
        }
        go f1()
        // Wait returns once f1 has exited. No virtual time passes,
        // because the busy loop never sleeps or blocks.
        synctest.Wait()
        after := time.Now()
        fmt.Println("after", after) // same virtual time as "before"
    })
}

The output confirms that the virtual clock remains at the start time, even though the busy loop consumes a few seconds of real time: virtual time advances only through blocking operations like time.Sleep, never through computation.

Reference: https://github.com/golang/go/blob/05d8984781f7cf2f0f39b53699a558b6a1965c6c/src/testing/synctest/synctest.go#L41

Tags: Testing, Concurrency, Go, unit testing, synctest, go-cache