
Design and Implementation of a Multi‑Level Cache Component Library in Go

This article explains the motivation, design principles, class diagram, and core Go code for a multi‑level cache library that supports in‑memory and distributed caches (Redis, Memcached) using the adapter, builder, and chain‑of‑responsibility patterns, and discusses cache‑database consistency strategies.


Writing Background

I had long wanted to write about a cache component library, since the way caches were encapsulated and used in my projects left room for improvement. Some recent reflection produced several design ideas, which I record here.

Why Cache Is Needed

A cache is unnecessary when request QPS is low, the internal logic has no I/O‑intensive paths or hot data, or the data changes extremely frequently. It becomes useful in the following cases:

Improve application performance by storing frequently used data in memory (in‑memory or distributed cache), avoiding slow storage reads and reducing response time.

Reduce database load: frequent DB access puts pressure on the database; caching cuts down query frequency.

Enhance user experience through faster responses and less waiting time.

Requirement Background

Based on my usual usage patterns, the requirements are simple:

Support multiple cache types (distributed caches such as Redis, Memcached, in‑memory caches) with an extensible design.

Allow combination of caches, e.g., check in‑memory first, then fall back to distributed cache, and finally write‑back to both.

Support various data structures via generics because different callers have different data shapes.

On cache miss, automatically query the database or downstream service and write the result back to the cache (the pseudo‑code below illustrates the typical flow).

query the cache first
if cache hit {
    return
}

if cache miss {
    query the database / downstream service
    if success {
        write back to cache
        return
    }
}
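The flow above can be made concrete with a small Go sketch; the `cache` and `db` maps here are hypothetical stand-ins for a real cache and database:

```go
package main

import (
	"errors"
	"fmt"
)

var (
	cache = map[string]string{}           // stand-in for a real cache
	db    = map[string]string{"k1": "v1"} // stand-in for the database
)

var errNotFound = errors.New("not found")

// getWithReadThrough queries the cache first; on a miss it falls back to
// the database and writes the result back to the cache.
func getWithReadThrough(key string) (string, error) {
	if v, ok := cache[key]; ok { // cache hit
		return v, nil
	}
	v, ok := db[key] // cache miss: query the database/downstream
	if !ok {
		return "", errNotFound
	}
	cache[key] = v // write back so the next read hits the cache
	return v, nil
}

func main() {
	v, _ := getWithReadThrough("k1")
	fmt.Println(v) // prints "v1"; a second call would now hit the cache
}
```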

Additional requirements include monitoring, automatic reconnection for distributed components, and optional logging/metrics support.

Class Diagram

A simple class diagram (core classes only; the image is not reproduced here) illustrates the use of the Adapter pattern and the Chain of Responsibility pattern.

Key descriptions:

IAdaptor defines the adapter interface; concrete implementations (e.g., RedisAdaptor, MemoryAdaptor) provide the three basic methods (Set, Get, Del).

The cache interface ICache declares the external methods Get, Set, Del, GetAndSet, and so on; MultiCache[T any] implements the multi‑level cache as a chain of responsibility.

Other classes are auxiliary and omitted for brevity.

Core Code

Cache Adapter

Adapter Interface (IAdaptor)

import (
	"context"
	"time"
)

// IAdaptor interface
type IAdaptor interface {
	Set(ctx context.Context, params map[string][]byte, expire time.Duration) error
	Get(ctx context.Context, k []string) (map[string][]byte, error)
	Del(ctx context.Context, k []string) error
}

The interface is intentionally minimal with only three methods.

Distributed Cache Adapter Implementation (Redis)

Two parts: Redis client and IAdaptor implementation.

Client

The client handles connection, monitoring, and reconnection.

// Config holds the client configuration (assembled via the builder pattern)
type Config struct {
	name         string
	addr         string
	password     string
	db           int
	poolSize     int
	poolTimeout  time.Duration
	readTimeout  time.Duration
	writeTimeout time.Duration
}

func (c Config) build() (*redis.Options, error) { /* omitted */ }

type Client struct {
	redisClient redis.UniversalClient
	conf        *Config
	ctx         context.Context
	sm          sync.RWMutex
}

func NewRedisClient(conf *Config) (*Client, error) { /* omitted */ }

func (c *Client) monitoring() { /* ping every 30s and reconnect */ }

Redis Adapter

type Cache struct { client *Client }

func NewRedisAdaptor(c *Client) client.IAdaptor { return &Cache{client: c} } // param renamed to avoid shadowing package client

func (r *Cache) Set(ctx context.Context, params map[string][]byte, expire time.Duration) error { /* pipelined set */ }
func (r *Cache) Del(ctx context.Context, k []string) error { return r.client.GetRedisClient().Del(ctx, k...).Err() }
func (r *Cache) Get(ctx context.Context, k []string) (map[string][]byte, error) { /* pipelined get */ }
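Filling in the elided bodies, a plausible pipelined implementation with go‑redis might look like the following (an assumption‑laden sketch: the article's `GetRedisClient` is taken to return a `redis.UniversalClient`, held directly here to keep the snippet self‑contained; note that `Exec` returns `redis.Nil` when any queued `GET` misses, which must be treated as a miss rather than an error):

```go
package adaptor

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// Cache wraps the managed client from the previous section; here the
// underlying go-redis client is held directly to keep the sketch short.
type Cache struct{ rdb redis.UniversalClient }

func (r *Cache) Set(ctx context.Context, params map[string][]byte, expire time.Duration) error {
	pipe := r.rdb.Pipeline()
	for k, v := range params {
		pipe.Set(ctx, k, v, expire) // queue one SET per key
	}
	_, err := pipe.Exec(ctx) // single round trip for the whole batch
	return err
}

func (r *Cache) Get(ctx context.Context, keys []string) (map[string][]byte, error) {
	pipe := r.rdb.Pipeline()
	cmds := make(map[string]*redis.StringCmd, len(keys))
	for _, k := range keys {
		cmds[k] = pipe.Get(ctx, k)
	}
	// Exec returns redis.Nil when any key is missing; treat that as a
	// cache miss, not an error.
	if _, err := pipe.Exec(ctx); err != nil && err != redis.Nil {
		return nil, err
	}
	out := make(map[string][]byte, len(keys))
	for k, cmd := range cmds {
		if b, err := cmd.Bytes(); err == nil {
			out[k] = b
		}
	}
	return out, nil
}
```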

Cache Orchestration Manager

The ICache interface defines the orchestration methods, especially GetAndSet and GetAndSetSingle for cache‑miss handling.

type ICache[T any] interface {
	Set(ctx context.Context, params map[string]T) error
	Get(ctx context.Context, keys []string) (map[string]T, error)
	GetAndSet(ctx context.Context, keys []string, f func([]string) (map[string]T, error)) (map[string]T, error)
	GetAndSetSingle(ctx context.Context, k string, f func(string) (T, bool, error)) (T, bool, error)
	Del(ctx context.Context, keys []string) error
}

MultiCache[T any] implements the multi‑level cache (e.g., memory → Redis, with the database reached only through the loader callback on a full miss) over a slice of IAdaptor handlers.

type MultiCache[T any] struct {
	handlers []client.IAdaptor
	opts     *MultiCacheOptions
	sf       singleflight.Group
}

func NewMultiCache[T any](opts *MultiCacheOptions, handlers ...client.IAdaptor) ICache[T] { /* omitted */ }

// Set writes to all handlers
func (c *MultiCache[T]) Set(ctx context.Context, params map[string]T) error { /* omitted */ }

// Get performs multi‑level lookup and writes back to higher‑level caches when lower‑level hits.
func (c *MultiCache[T]) Get(ctx context.Context, keys []string) (map[string]T, error) { /* omitted */ }

// GetAndSetSingle uses singleflight to coalesce concurrent miss requests.
func (c *MultiCache[T]) GetAndSetSingle(ctx context.Context, k string, f func(string) (T, bool, error)) (T, bool, error) { /* omitted */ }

Usage Example

func getRedisAdaptor() client.IAdaptor { /* create Redis adaptor */ }
func getMemAdaptor() client.IAdaptor { /* create memory adaptor */ }

func main() {
	type Person struct {
		Name string
		Age  int
	}
	cache := goCache.NewMultiCache[*Person](&goCache.MultiCacheOptions{
		Base:      goCache.Base{Prefix: "demo"},
		EnableLog: false,
		WriteNil:  false,
		Expire:    time.Minute * 10,
	}, getMemAdaptor(), getRedisAdaptor())

	// Set
	m := map[string]*Person{"xxxxx_test1": {Name: "李四", Age: 20}}
	_ = cache.Set(context.TODO(), m)

	// Get (same key that was just set)
	kv, _ := cache.Get(context.TODO(), []string{"xxxxx_test1"})
	b, _ := sonic.Marshal(kv)
	fmt.Println(string(b))

	// Delete
	_ = cache.Del(context.TODO(), []string{"xxxxx_test1"})
}

Cache‑Database Inconsistency Issues

Several scenarios can leave the cache and the database inconsistent. There are four typical update/delete orderings, each with a different worst case:

Update cache first, then DB – may leave stale cache if DB fails.

Update DB first, then cache – cache may stay stale if cache update fails.

Delete cache first, then update DB – under high QPS a concurrent reader can load the old DB value and write it back, repopulating the cache with stale data.

Update DB first, then delete cache – most common; solutions include low‑QPS transactions, retry mechanisms, or asynchronous MQ‑based deletion.

I recommend the fourth approach, combined with retries and cache expiration times; complex asynchronous MQ‑based pipelines are usually more trouble than they are worth.

Summary

This article presented a simple, extensible multi‑level cache component built with the builder, adapter, and chain‑of‑responsibility patterns.

For cache‑DB consistency, the preferred strategy is to update the database first and then delete the cache, optionally adding retries and expiration to ensure eventual consistency.

Written by Rare Earth Juejin Tech Community (Juejin), a tech community that helps developers grow.