Mastering Cache Eviction in Go: When and How to Use LRU

This article explains why naive cache eviction fails and why LRU is the go‑to strategy for many Go projects, then walks through a production‑ready LRU implementation with detailed code, lock‑granularity tips, key‑design considerations, and scenarios where LRU is not suitable.

1. Common Cache Eviction Methods: Simple but Brutal

Many projects simply delete an arbitrary entry when the cache reaches its capacity. The code is fast and cheap but ignores access patterns, making the hit rate uncontrollable and rendering the cache ineffective for hot data.

// Evict whichever key the map iterator yields first — effectively a
// random eviction, since Go randomizes map iteration order.
if len(cache) >= maxSize {
    for k := range cache {
        delete(cache, k) // the victim is chosen with no regard to access recency
        break
    }
}

2. Why Most Engineers End Up Using LRU

LRU (Least Recently Used) is not a complex algorithm, yet it matches a realistic business assumption:

Data accessed recently is likely to be accessed again soon.

This assumption holds for user session data, hot configuration, and frequently accessed reference data, making LRU a practical eviction policy.

3. LRU Implementation Overview

An engineering‑ready LRU cache in Go consists of two parts:

- Hash map (map)
  - O(1) lookup
  - key → pointer to list node
- Doubly linked list (list)
  - Maintains access order
  - Head holds the most recently used items
  - Tail holds the least recently used items

The map solves the speed problem, while the list determines which entry to evict.

Data Structure Definition

import (
    "container/list"
    "sync"
)

// entry wraps the key and value stored in the list.
// The key is kept so the map entry can be deleted when its list node is evicted.
type entry struct {
    key   string
    value any
}

// LRUCache pairs a map (O(1) lookup) with a doubly linked list (access order).
type LRUCache struct {
    capacity int
    cache    map[string]*list.Element // key → list node
    list     *list.List               // front = most recent, back = least recent
    mu       sync.Mutex               // guards cache and list
}

Initialization

// NewLRUCache creates a cache with the given capacity
func NewLRUCache(capacity int) *LRUCache {
    if capacity <= 0 {
        capacity = 1 // guarantee at least one slot
    }
    return &LRUCache{
        capacity: capacity,
        cache:    make(map[string]*list.Element),
        list:     list.New(),
    }
}

Get: Access Updates Position

// Get returns the value and moves the entry to the front (most recent)
func (l *LRUCache) Get(key string) (any, bool) {
    l.mu.Lock()
    defer l.mu.Unlock()

    if elem, ok := l.cache[key]; ok {
        l.list.MoveToFront(elem)
        return elem.Value.(*entry).value, true
    }
    return nil, false
}

The crucial line l.list.MoveToFront(elem) implements the core LRU semantics: any accessed item becomes "recently used".

Put: Write and Eviction Logic

// Put inserts or updates a value and evicts if necessary
func (l *LRUCache) Put(key string, value any) {
    l.mu.Lock()
    defer l.mu.Unlock()

    // 1. Update existing key and move to front
    if elem, ok := l.cache[key]; ok {
        elem.Value.(*entry).value = value
        l.list.MoveToFront(elem)
        return
    }

    // 2. Evict if capacity reached
    if l.list.Len() >= l.capacity {
        l.removeOldest()
    }

    // 3. Insert new entry at the front
    elem := l.list.PushFront(&entry{key: key, value: value})
    l.cache[key] = elem
}

Evicting the Oldest Entry

// removeOldest removes the tail element (least recently used)
func (l *LRUCache) removeOldest() {
    elem := l.list.Back()
    if elem == nil {
        return
    }
    l.list.Remove(elem)
    ent := elem.Value.(*entry)
    delete(l.cache, ent.key) // remove map entry
}

This deterministic rule ensures that eviction is based on actual usage rather than arbitrary guesses.
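
To see the pieces working together, here is a quick usage sketch (assuming the snippets above are assembled into one package, plus fmt for output):

func main() {
    c := NewLRUCache(2)
    c.Put("a", 1)
    c.Put("b", 2)
    c.Get("a")    // touching "a" makes it the most recently used
    c.Put("c", 3) // capacity reached: evicts "b", the least recently used

    _, ok := c.Get("b")
    fmt.Println(ok) // false — "b" was evicted
    v, _ := c.Get("a")
    fmt.Println(v) // 1 — "a" survived because it was accessed recently
}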

4. Common Pitfalls in Go Projects

1️⃣ Lock Granularity

Both Get and Put modify the list, so they are not read‑only operations. In high‑QPS scenarios the lock can become a hotspot. Solutions include sharding the cache or separating read‑write paths.
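
As one illustration of sharding, here is a minimal sketch that hashes each key to one of several independent LRUCache shards, each with its own mutex, so concurrent goroutines contend only within a shard (the shard count and the FNV‑1a hash are illustrative choices, not from the original):

import "hash/fnv"

// ShardedLRU spreads keys across independent LRU caches so that
// a lock is only contended within a single shard.
type ShardedLRU struct {
    shards []*LRUCache
}

func NewShardedLRU(shardCount, capacityPerShard int) *ShardedLRU {
    s := &ShardedLRU{shards: make([]*LRUCache, shardCount)}
    for i := range s.shards {
        s.shards[i] = NewLRUCache(capacityPerShard)
    }
    return s
}

// shardFor hashes the key with FNV-1a and maps it to a shard.
func (s *ShardedLRU) shardFor(key string) *LRUCache {
    h := fnv.New32a()
    h.Write([]byte(key))
    return s.shards[h.Sum32()%uint32(len(s.shards))]
}

func (s *ShardedLRU) Get(key string) (any, bool) { return s.shardFor(key).Get(key) }
func (s *ShardedLRU) Put(key string, value any)  { s.shardFor(key).Put(key, value) }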

2️⃣ Key Design Matters

Even a correct LRU can appear ineffective if keys are too fine‑grained: accesses scatter across many distinct keys, no single entry is hit often enough to stay resident, and hot data quietly ages out.
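
A hypothetical illustration of the difference (the user:... key schemes below are made up for this example):

// Too fine-grained: one key per field. A single hot user scatters
// across many entries, each touched rarely, so all of them age out.
func fineGrainedKeys(userID string) []string {
    return []string{"user:" + userID + ":name", "user:" + userID + ":email"}
}

// Coarser: one key per user. Repeated accesses concentrate on one
// entry and keep it near the front of the list.
func coarseKey(userID string) string {
    return "user:" + userID
}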

3️⃣ LRU Is Not a Universal Solution

LRU performs poorly for batch‑scan workloads, one‑time data loads, or sequential traversal of large datasets, where other strategies (e.g., FIFO, LFU, or no cache) may be more appropriate.
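
A small sketch of the failure mode, reusing the cache built above (fmt assumed for output): a one‑pass scan over more distinct keys than the capacity flushes every previously hot entry, even though none of the scan keys is ever read again.

func scanPollution() {
    c := NewLRUCache(3)
    c.Put("hot", "frequently used value")

    // A sequential scan over 10 one-time keys fills and refills the
    // cache, evicting "hot" along the way.
    for i := 0; i < 10; i++ {
        c.Put(fmt.Sprintf("scan:%d", i), i)
    }

    _, ok := c.Get("hot")
    fmt.Println(ok) // false — the scan evicted the genuinely hot entry
}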

5. Why Cache Eviction Is a Typical Engineering Algorithm Problem

Implementing LRU is straightforward, but the surrounding decisions are highly engineering‑focused: cache size, eviction policy, tolerance for short‑term pollution, and whether to track hit rates. These choices depend on a clear understanding of the system's access patterns.
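
Hit‑rate tracking, for example, can be layered on without touching the core. The sketch below wraps the LRUCache from earlier with counters; MeteredLRU and its atomic fields are illustrative additions using the standard sync/atomic package, not part of the original code.

// MeteredLRU wraps LRUCache with hit/miss counters so the eviction
// policy can be judged against real traffic instead of intuition.
type MeteredLRU struct {
    *LRUCache
    hits, misses atomic.Int64
}

func (m *MeteredLRU) Get(key string) (any, bool) {
    v, ok := m.LRUCache.Get(key)
    if ok {
        m.hits.Add(1)
    } else {
        m.misses.Add(1)
    }
    return v, ok
}

// HitRate returns hits / (hits + misses), or 0 before any lookups.
func (m *MeteredLRU) HitRate() float64 {
    h, mi := m.hits.Load(), m.misses.Load()
    if h+mi == 0 {
        return 0
    }
    return float64(h) / float64(h+mi)
}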

Do you know your system's access pattern?

Conclusion

In Go projects, cache eviction is often underestimated. Performance issues usually stem from using the wrong eviction strategy rather than from missing a cache altogether. LRU's value lies in turning "access behavior" into a controllable rule.

For a deeper dive, see the related article "Go High‑Concurrency Local Cache: TTL + LRU + Sharded + Singleflight + Async Refresh" (https://mp.weixin.qq.com/s?__biz=MzIyNTc4NzU4Mg==&mid=2247484262&idx=1&sn=623cdc2d7439bdde52b66130d1dd0210&scene=21#wechat_redirect).
