
Designing a High‑Performance In‑Memory Cache in Go: From Simple Map to Sharded Locking and BigCache

This article explains how to build an efficient in‑memory cache in Go by starting with a basic map, adding read‑write locks for concurrency, reducing lock contention through sharding, minimizing GC overhead with a ring buffer, and finally using the high‑performance BigCache library.


From a map

When data that rarely changes but is frequently read needs to be cached, the simplest solution is to store it in an in‑memory map (dictionary) using key‑value pairs. Reading is done with v = m[key], writing with m[key] = v.
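As a minimal sketch of this starting point (variable names are illustrative), a plain map works as a cache as long as only one goroutine touches it:

```go
package main

import "fmt"

func main() {
	// simplest cache: a plain map, safe only for single-goroutine use
	cache := make(map[string]string)

	cache["greeting"] = "hello" // write: m[key] = v
	v := cache["greeting"]      // read:  v = m[key]
	fmt.Println(v)

	// the two-value form distinguishes a missing key from a zero value
	if _, ok := cache["absent"]; !ok {
		fmt.Println("cache miss")
	}
}
```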

From single‑threaded to concurrent read/write

In a single‑threaded scenario this works, but concurrent reads and writes on a plain Go map cause race conditions (and will trigger the runtime's concurrent map access check), so a read‑write lock must be added. The lock‑protected versions look like:

var mu sync.RWMutex

// read
mu.RLock()
v := m[key]
mu.RUnlock()

// write
mu.Lock()
m[key] = v
mu.Unlock()
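Wrapped up as a type, this becomes a small thread-safe cache. The names below (Cache, NewCache) are illustrative, not from any library:

```go
package main

import (
	"fmt"
	"sync"
)

// Cache wraps a map with a read-write lock: many readers may hold
// RLock at once, while a writer's Lock is exclusive.
type Cache struct {
	mu sync.RWMutex
	m  map[string]string
}

func NewCache() *Cache {
	return &Cache{m: make(map[string]string)}
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

func (c *Cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}

func main() {
	c := NewCache()
	c.Set("lang", "go")
	if v, ok := c.Get("lang"); ok {
		fmt.Println(v)
	}
}
```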

Reducing lock contention

When QPS is high, a single lock becomes a bottleneck. By sharding the map into multiple segments, each with its own lock, only operations on the same shard contend for the same lock.
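A sketch of the sharding idea, assuming an FNV hash to pick the shard (the type names and the shard count of 16 are illustrative; a power of two lets the modulo become a bit mask):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const shardCount = 16 // power of two so we can mask instead of mod

// shard is a map guarded by its own lock.
type shard struct {
	mu sync.RWMutex
	m  map[string]string
}

// ShardedCache spreads keys across shards so goroutines touching
// different shards never contend for the same lock.
type ShardedCache struct {
	shards [shardCount]*shard
}

func NewShardedCache() *ShardedCache {
	c := &ShardedCache{}
	for i := range c.shards {
		c.shards[i] = &shard{m: make(map[string]string)}
	}
	return c
}

// shardFor hashes the key and masks the hash into a shard index.
func (c *ShardedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.shards[h.Sum32()&(shardCount-1)]
}

func (c *ShardedCache) Get(key string) (string, bool) {
	s := c.shardFor(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.m[key]
	return v, ok
}

func (c *ShardedCache) Set(key, value string) {
	s := c.shardFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = value
}

func main() {
	c := NewShardedCache()
	c.Set("a", "1")
	v, _ := c.Get("a")
	fmt.Println(v)
}
```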

GC impact

In Go, maps that contain pointers trigger garbage‑collector scans. To avoid this, keys are hashed to integers and values are stored in a large byte buffer (ring buffer); the map holds only the index of the value in the buffer.

Ring buffer design

The ring buffer stores serialized key‑value data, each entry prefixed with a fixed‑size header containing its length, so the system can locate and read a complete entry from an offset. Index 0 is reserved to indicate a missing entry, so stored positions are offset by one.
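A simplified, append-only sketch of these two ideas together (a real ring buffer also wraps around and evicts old entries; the store type and its fields are illustrative): the map holds no pointers, only a hashed key and an integer position in a shared byte buffer, so the GC never has to scan it.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// store keeps a pointer-free index: hashed key -> position+1 in buf,
// where position 0 means "missing".
type store struct {
	index map[uint64]uint32
	buf   []byte
}

func hashKey(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

func (s *store) set(key string, value []byte) {
	offset := uint32(len(s.buf))
	// prefix each entry with a 4-byte length header so it can be read back
	var hdr [4]byte
	binary.LittleEndian.PutUint32(hdr[:], uint32(len(value)))
	s.buf = append(s.buf, hdr[:]...)
	s.buf = append(s.buf, value...)
	s.index[hashKey(key)] = offset + 1 // +1 so 0 can mean "missing"
}

func (s *store) get(key string) ([]byte, bool) {
	pos, ok := s.index[hashKey(key)]
	if !ok || pos == 0 {
		return nil, false
	}
	off := pos - 1
	n := binary.LittleEndian.Uint32(s.buf[off : off+4])
	return s.buf[off+4 : off+4+n], true
}

func main() {
	s := &store{index: make(map[uint64]uint32)}
	s.set("user:1", []byte("alice"))
	v, ok := s.get("user:1")
	fmt.Println(ok, string(v))
}
```

Because both the key and the stored position are plain integers, the map contains no pointers at all; the only pointer the GC sees is the single backing byte slice.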

Using bigcache

bigcache implements the above ideas. Configuration example:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/allegro/bigcache/v3"
)

func main() {
    config := bigcache.DefaultConfig(10 * time.Minute) // entries expire after this life window
    config.Shards = 1024                               // number of shards; must be a power of two

    cache, err := bigcache.New(context.Background(), config)
    if err != nil {
        log.Fatal(err)
    }

    key := "welcome"
    value := []byte("xiaobaidebug")
    if err := cache.Set(key, value); err != nil {
        log.Fatal(err)
    }

    entry, err := cache.Get(key)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Entry: %s\n", entry)
}

bigcache achieves high performance but has drawbacks: values are stored as byte slices, so structured data must be serialized on write and deserialized on read, and its eviction policy is FIFO only, with no LRU or LFU support.

Summary

For low‑frequency read/write, a simple locked map is sufficient.

For high‑frequency access, use sharded locks to reduce contention.

In Go, reduce GC overhead by avoiding pointer‑containing maps: store values in a ring buffer and keep only integer indexes in the map.

Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
