
Concurrent Safety of Go Maps: Issues, Solutions, and Performance Comparison

Go maps are not safe for concurrent access: a program can panic when multiple goroutines read and write the same map. To prevent this you can use sync.Once for immutable data, protect the map with sync.RWMutex, employ sharded locks via concurrent-map, or use the standard library's sync.Map. Each offers different performance trade-offs depending on the read/write ratio and concurrency level.

37 Interactive Technology Team

When developing with Go, a common but hard‑to‑detect problem is that a program may run fine in tests, pass static analysis, and still panic intermittently in production. The panic is caused by multiple goroutines reading and writing the same map concurrently, which triggers Go's runtime detection of a race condition and aborts the program.

Because a Go map is not safe for concurrent access, simultaneous reads and writes can leave the map in a partially written state that other goroutines then observe, leading to undefined behavior. When the runtime detects concurrent access it calls runtime.throw(), which cannot be recovered, so the process (e.g. a Kubernetes pod) restarts.

Why Map Concurrency Safety Matters

When several goroutines modify the same map, one goroutine may read the map while another is halfway through a write, observing a "half-finished" state. The built-in map type simply provides no concurrency protection of its own.

Concurrency safety means that shared resources remain correct and consistent when accessed from multiple threads or goroutines. Without it, data corruption, incorrect results, or crashes can occur.

In Go, types such as map, slice, and custom structs are not inherently concurrent-safe. The runtime will panic if it detects concurrent map access.

How to Solve Map Concurrency Issues

1. Use Go’s Concurrency Primitives

If the data is immutable after initialization, sync.Once can guarantee that initialization runs only once.

type User struct {
    Name       string
    Other      map[string]interface{}
    ConfigOnce sync.Once
}

func (u *User) InitConfigOnce(name string, other map[string]interface{}) *User {
    u.ConfigOnce.Do(func() {
        fmt.Println("ok")
        u.Name = name
        u.Other = other
    })
    return u
}

func (u *User) GetUserConfig() {
    fmt.Println(u)
}

Calling code:

func main() {
    var u User
    var wg sync.WaitGroup
    num := 30
    wg.Add(num)
    for i := 0; i < num; i++ {
        go func() {
            u.InitConfigOnce("yzb", map[string]interface{}{ "age": 18 })
            u.GetUserConfig()
            wg.Done()
        }()
    }
    wg.Wait()
}

2. Add a Read‑Write Mutex (RWMutex)

When reads dominate writes, a sync.RWMutex gives better performance than a plain mutex.

type Mmap struct {
    Data map[string]interface{}
    Mu   sync.RWMutex
}

func InitMmap() *Mmap {
    return &Mmap{Data: make(map[string]interface{})}
}

func (m *Mmap) Get(name string) interface{} {
    m.Mu.RLock()
    defer m.Mu.RUnlock()
    return m.Data[name]
}

func (m *Mmap) Set(data map[string]interface{}) {
    m.Mu.Lock()
    defer m.Mu.Unlock()
    for k, v := range data {
        m.Data[k] = v
    }
}

func (m *Mmap) SetOne(key string, val interface{}) {
    m.Mu.Lock()
    defer m.Mu.Unlock()
    m.Data[key] = val
}

Calling code:

func main() {
    c := InitMmap()
    var wg sync.WaitGroup
    for i := 0; i < 30; i++ {
        wg.Add(2)
        go func() { defer wg.Done(); c.SetOne("name", "yzb") }()
        go func() { defer wg.Done(); fmt.Println(c.Get("name")) }()
    }
    wg.Wait() // without this, main may exit before the goroutines run
}

3. Sharding (Partitioned Locks)

Instead of a single large lock, split the map into many shards, each protected by its own lock. This reduces lock contention under high concurrency. The open‑source library concurrent‑map implements this idea.

import (
    cmap "github.com/orcaman/concurrent-map"
)

type cmapConfig struct {
    Cmap cmap.ConcurrentMap
}

func InitCmap() *cmapConfig {
    return &cmapConfig{cmap.New()}
}

func (c *cmapConfig) Set(config map[string]interface{}) {
    for k, v := range config {
        c.Cmap.Set(k, v)
    }
}

func (c *cmapConfig) Get(k string) interface{} {
    v, ok := c.Cmap.Get(k)
    if ok { return v }
    return nil
}

4. Go’s Native Concurrent Map (sync.Map)

The standard library provides sync.Map, which is optimized for two scenarios:

Read‑many, write‑once caches (key written once, read many times).

Multiple goroutines accessing disjoint key sets.

Implementation details: sync.Map maintains a read‑only map and a dirty map. Writes go to the dirty map; reads first check the read‑only map, and on a miss they acquire a lock to consult the dirty map. When the miss count reaches the size of the dirty map, the dirty map is promoted to the read‑only map.

type syncMapConfig struct {
    Smap sync.Map
}

func InitSmap() *syncMapConfig {
    return &syncMapConfig{Smap: sync.Map{}}
}

func (s *syncMapConfig) Set(config map[string]interface{}) {
    for k, v := range config {
        s.Smap.Store(k, v)
    }
}

func (s *syncMapConfig) Get(k string) interface{} {
    if v, ok := s.Smap.Load(k); ok {
        return v
    }
    return nil
}

Performance Comparison

Four approaches were benchmarked under different read/write ratios and concurrency levels (10 vs 1000 goroutines). The abbreviations used are:

Cmap: concurrent-map (sharded lock).

Smap: sync.Map.

Mmap: map protected by sync.RWMutex.

Key findings:

When reads = writes, sync.Map > concurrent‑map > RWMutex.

When reads ≫ writes, concurrent‑map > sync.Map > RWMutex.

When writes ≫ reads, sync.Map > concurrent‑map > RWMutex.

At low concurrency (10 goroutines), RWMutex often outperforms the other two because lock contention is minimal.

The read-write lock (RWMutex) approach has the coarsest granularity, one lock for the entire map, and therefore suffers most under high contention.

Final Recommendation

The choice of map‑concurrency strategy should consider both the expected concurrency level and the read/write pattern:

                    Read-many / Write-few    Write-many / Read-few
High concurrency    concurrent-map           sync.Map
Low concurrency     RWMutex map              RWMutex map

In summary: for high-throughput read-dominant workloads use a sharded map (concurrent-map); for write-dominant workloads use sync.Map; and for low-traffic scenarios a simple sync.RWMutex-protected map is sufficient.
