
Performance Analysis of Go map Concurrency: sync.Map vs map+RWLock vs concurrent‑map

The article compares Go’s native map (which crashes under concurrent access), a map protected by sync.RWMutex, the read‑optimized sync.Map introduced in Go 1.9 (lock‑free on the read path), and the sharded orcaman/concurrent‑map, showing that sync.Map excels in read‑heavy workloads while sharded maps better handle frequent inserts, and suggesting other cache libraries for eviction or expiration needs.

Tencent Cloud Developer

Introduction – In a high‑traffic recall‑ranking service, upstream request volume puts heavy pressure on downstream storage. The scenario demands high performance and tolerates eventual consistency, so the author evaluates several concurrency‑safe Go key‑value cache options.

1. Concurrent Read/Write Test on native Go map

The native map is not safe for concurrent reads and writes, even when keys differ. The following demo shows a data race:

package main

func main() { testMapReadWriteDiffKey() }

// Two goroutines touch *different* keys, yet the runtime still
// detects the unsynchronized access and aborts the program.
func testMapReadWriteDiffKey() {
    m := make(map[int]int)
    go func() { for { m[100] = 100 } }()
    go func() { for { _ = m[12] } }()
    select {} // block forever
}

The program quickly aborts with "fatal error: concurrent map read and map write" (the original article illustrates this with a screenshot); running it with go run -race pinpoints the data race as well.

2. map + RWLock

Before sync.Map was introduced, a common pattern was to embed a map together with a sync.RWMutex in an anonymous struct:

var counter = struct {
    sync.RWMutex
    m map[string]int
}{m: make(map[string]int)}

Reading:

counter.RLock()
n := counter.m["some_key"]
counter.RUnlock()
fmt.Println("some_key:", n)

Writing:

counter.Lock()
counter.m["some_key"]++
counter.Unlock()
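Putting the two snippets together, here is a minimal runnable sketch of the pattern (the helper names incr and get are ours, not from the article):

```go
package main

import (
	"fmt"
	"sync"
)

// counter embeds an RWMutex next to the map it protects,
// mirroring the anonymous-struct pattern shown above.
var counter = struct {
	sync.RWMutex
	m map[string]int
}{m: make(map[string]int)}

// incr takes the exclusive write lock before mutating the map.
func incr(key string) {
	counter.Lock()
	counter.m[key]++
	counter.Unlock()
}

// get takes the shared read lock, so many readers proceed in parallel.
func get(key string) int {
	counter.RLock()
	defer counter.RUnlock()
	return counter.m[key]
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			incr("some_key")
		}()
	}
	wg.Wait()
	fmt.Println("some_key:", get("some_key")) // prints "some_key: 100"
}
```

The drawback is that every write serializes on a single mutex, and under heavy read traffic even RLock/RUnlock becomes a contention point.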

The author asks how sync.Map (introduced in Go 1.9) differs from this approach and what scenarios it suits.

3. sync.Map

Design Overview – sync.Map implements read‑write separation: a lock‑free read map (readOnly) and a mutex‑protected dirty map. Reads first consult read; if the key is missing and dirty may contain it (read.amended is true), the implementation locks and checks dirty. When the miss count reaches the size of dirty, the dirty map is promoted to become the new read map, reducing future lock contention.

Key structs

type Map struct {
    mu      Mutex
    read    atomic.Value // readOnly
    dirty   map[interface{}]*entry
    misses  int
}

type readOnly struct {
    m       map[interface{}]*entry
    amended bool
}

type entry struct {
    p unsafe.Pointer // *interface{}
}

The p field can be nil, expunged (a sentinel indicating permanent deletion), or point to the actual value.

Load method

func (m *Map) Load(key interface{}) (value interface{}, ok bool) {
    read, _ := m.read.Load().(readOnly)
    e, ok := read.m[key]
    if !ok && read.amended {
        m.mu.Lock()
        read, _ = m.read.Load().(readOnly)
        e, ok = read.m[key]
        if !ok && read.amended {
            e, ok = m.dirty[key]
            m.missLocked()
        }
        m.mu.Unlock()
    }
    if !ok {
        return nil, false
    }
    return e.load()
}

Load first accesses the lock‑free read. If the key is absent but read.amended is true, it locks and checks dirty. The miss counter may trigger promotion of dirty to read.

Store method

func (m *Map) Store(key, value interface{}) {
    read, _ := m.read.Load().(readOnly)
    if e, ok := read.m[key]; ok && e.tryStore(&value) {
        return
    }
    m.mu.Lock()
    read, _ = m.read.Load().(readOnly)
    if e, ok := read.m[key]; ok {
        if e.unexpungeLocked() {
            m.dirty[key] = e
        }
        e.storeLocked(&value)
    } else if e, ok := m.dirty[key]; ok {
        e.storeLocked(&value)
    } else {
        if !read.amended {
            m.dirtyLocked()
            m.read.Store(readOnly{m: read.m, amended: true})
        }
        m.dirty[key] = newEntry(value)
    }
    m.mu.Unlock()
}

Store tries a lock‑free update; if that fails it locks and updates dirty. The double‑check pattern ensures correctness when dirty may have been promoted while acquiring the lock.

Delete (LoadAndDelete) method

func (m *Map) LoadAndDelete(key interface{}) (value interface{}, loaded bool) {
    read, _ := m.read.Load().(readOnly)
    e, ok := read.m[key]
    if !ok && read.amended {
        m.mu.Lock()
        read, _ = m.read.Load().(readOnly)
        e, ok = read.m[key]
        if !ok && read.amended {
            e, ok = m.dirty[key]
            delete(m.dirty, key)
            m.missLocked()
        }
        m.mu.Unlock()
    }
    if ok {
        return e.delete()
    }
    return nil, false
}

Delete removes the key from both read and dirty, returning the previous value if present.

Range method

func (m *Map) Range(f func(key, value interface{}) bool) {
    read, _ := m.read.Load().(readOnly)
    if read.amended {
        m.mu.Lock()
        read, _ = m.read.Load().(readOnly)
        if read.amended {
            read = readOnly{m: m.dirty}
            m.read.Store(read)
            m.dirty = nil
            m.misses = 0
        }
        m.mu.Unlock()
    }
    for k, e := range read.m {
        v, ok := e.load()
        if !ok {
            continue
        }
        if !f(k, v) {
            break
        }
    }
}

If all keys reside in read, Range is lock‑free. When dirty holds extra keys, the method locks once, promotes dirty to become the new read, and then iterates without further locking.

4. Summary of sync.Map

Optimized for read‑heavy workloads with few new key insertions (append‑only pattern).

Not suitable for workloads that frequently insert new keys: each new‑key insertion must take the mutex to write into dirty, and rebuilding dirty after a promotion copies every live entry from read, causing contention.

The internal expunged sentinel distinguishes permanently deleted entries from merely nil values, enabling lock‑free reads of existing keys.
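The read‑heavy claim can be spot‑checked with a rough micro‑benchmark built on testing.Benchmark; this is our sketch, not the article's measurement, and the numbers vary widely with GOMAXPROCS and hardware:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

const nKeys = 1024

var (
	sm    sync.Map
	mu    sync.RWMutex
	plain = map[int]int{}
)

func init() {
	// Pre-fill both maps so the benchmark measures pure reads.
	for i := 0; i < nKeys; i++ {
		sm.Store(i, i)
		plain[i] = i
	}
}

// benchSyncMapReads measures parallel Loads on sync.Map,
// which hit the lock-free read map.
func benchSyncMapReads() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		b.RunParallel(func(pb *testing.PB) {
			i := 0
			for pb.Next() {
				sm.Load(i % nKeys)
				i++
			}
		})
	})
}

// benchRWMutexReads measures the same workload through an RLock,
// which every reader must acquire and release.
func benchRWMutexReads() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		b.RunParallel(func(pb *testing.PB) {
			i := 0
			for pb.Next() {
				mu.RLock()
				_ = plain[i%nKeys]
				mu.RUnlock()
				i++
			}
		})
	})
}

func main() {
	fmt.Println("sync.Map:   ", benchSyncMapReads())
	fmt.Println("map+RWMutex:", benchRWMutexReads())
}
```

On multi-core machines the sync.Map variant typically scales better with reader count, since RLock/RUnlock still bounce a shared counter between cores.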

5. orcaman/concurrent‑map

The open‑source orcaman/concurrent‑map implements sharded maps to reduce lock contention for workloads with frequent inserts and reads.

// SHARD_COUNT defines the number of shards
var SHARD_COUNT = 32

type ConcurrentMap []*ConcurrentMapShared

type ConcurrentMapShared struct {
    items map[string]interface{}
    sync.RWMutex // protects items
}

func New() ConcurrentMap {
    m := make(ConcurrentMap, SHARD_COUNT)
    for i := 0; i < SHARD_COUNT; i++ {
        m[i] = &ConcurrentMapShared{items: make(map[string]interface{})}
    }
    return m
}

Key operations:

Get: hash the key to a shard, RLock, read, RUnlock.

Set: hash, Lock, write, Unlock.

Remove: hash, Lock, delete, Unlock.

Count: iterate all shards, RLock each, sum lengths.

Upsert: hash, Lock, apply a callback to update or insert atomically.

This design is ideal for scenarios that repeatedly insert and read new values, unlike sync.Map which favors read‑dominant workloads.
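The sharding idea can be reproduced in a few lines without the library. This is our simplified sketch (concurrent‑map likewise hashes keys with FNV; the names ShardedMap and shardFor are ours):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const shardCount = 32

// shard is one lock-protected slice of the keyspace.
type shard struct {
	sync.RWMutex
	items map[string]interface{}
}

// ShardedMap spreads keys across shards so writers touching
// different shards never contend on the same lock.
type ShardedMap []*shard

func NewShardedMap() ShardedMap {
	m := make(ShardedMap, shardCount)
	for i := range m {
		m[i] = &shard{items: make(map[string]interface{})}
	}
	return m
}

// shardFor hashes the key with FNV-1a and picks its shard.
func (m ShardedMap) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return m[h.Sum32()%shardCount]
}

func (m ShardedMap) Set(key string, value interface{}) {
	s := m.shardFor(key)
	s.Lock()
	s.items[key] = value
	s.Unlock()
}

func (m ShardedMap) Get(key string) (interface{}, bool) {
	s := m.shardFor(key)
	s.RLock()
	defer s.RUnlock()
	v, ok := s.items[key]
	return v, ok
}

func main() {
	m := NewShardedMap()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			m.Set(fmt.Sprintf("key-%d", i), i)
		}(i)
	}
	wg.Wait()
	v, ok := m.Get("key-42")
	fmt.Println(v, ok) // 42 true
}
```

Because each shard has its own RWMutex, write contention drops roughly by a factor of the shard count for uniformly distributed keys.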

6. Recommendations

For cache components that also need expiration, eviction policies, or GC optimizations, consider libraries such as freecache, gocache, fastcache, bigcache, or groupcache.

References include official Go blog posts, Medium articles, and the source code of sync.Map and orcaman/concurrent‑map.
