How Olric Handles Concurrency and Partition Rebalancing in Go

This article deep‑dives into Olric's concurrency architecture—sharding with local locks, goroutine‑driven pipelines, atomic operations, and its three‑step partition rebalancing process—showing how Go can power high‑throughput, consistent distributed caches.

Code Wrench

Concurrency Model: Go‑Level Practices

Olric achieves high performance by combining fine‑grained sharding with per‑shard locks, a goroutine‑based pipeline, and atomic operations backed by distributed locking.

1. Shard + Local Lock

Each key space is split into multiple shards; every shard owns an independent sync.RWMutex and a map of entries.

type shard struct {
    mu    sync.RWMutex // guards items; readers take RLock, writers take Lock
    items map[string]*Entry
}

func (s *shard) Put(key string, value []byte) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.items[key] = &Entry{Key: []byte(key), Value: value}
}

Advantages:

Avoids a global lock, reducing contention.

Writes to different shards proceed without blocking each other.

Lower GC pressure and more efficient memory management.
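Putting the pieces together, shard routing might look like the following self‑contained sketch. The shardedMap type and FNV‑based routing are illustrative assumptions, not Olric's actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

type Entry struct {
	Key   []byte
	Value []byte
}

type shard struct {
	mu    sync.RWMutex
	items map[string]*Entry
}

// shardedMap splits the key space across n independent shards,
// so writes to different shards never contend on the same lock.
type shardedMap struct {
	shards []*shard
}

func newShardedMap(n int) *shardedMap {
	m := &shardedMap{shards: make([]*shard, n)}
	for i := range m.shards {
		m.shards[i] = &shard{items: make(map[string]*Entry)}
	}
	return m
}

// getShard routes a key to a shard by hashing it.
func (m *shardedMap) getShard(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return m.shards[h.Sum32()%uint32(len(m.shards))]
}

func (m *shardedMap) Put(key string, value []byte) {
	s := m.getShard(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[key] = &Entry{Key: []byte(key), Value: value}
}

func (m *shardedMap) Get(key string) ([]byte, bool) {
	s := m.getShard(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	e, ok := s.items[key]
	if !ok {
		return nil, false
	}
	return e.Value, true
}

func main() {
	m := newShardedMap(8)
	m.Put("user:42", []byte("alice"))
	v, ok := m.Get("user:42")
	fmt.Println(ok, string(v))
}
```

Because routing is a pure function of the key, no coordination is needed to pick a shard; the lock is only held for the map access itself.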

2. Goroutine Management & Pipeline

Network communication and data migration are handled by a pipeline that batches jobs and runs them in separate goroutines.

type Pipeline struct {
    jobs chan func()
    wg   sync.WaitGroup
}

func (p *Pipeline) Submit(job func()) {
    p.wg.Add(1)
    p.jobs <- job
}

// Run starts a fixed number of workers, so the goroutine count
// stays bounded no matter how many jobs are submitted.
func (p *Pipeline) Run(workers int) {
    for i := 0; i < workers; i++ {
        go func() {
            for job := range p.jobs {
                job()
                p.wg.Done()
            }
        }()
    }
}

Features:

Batch sending reduces network overhead.

Parallel data migration speeds up transfers.

Controlled goroutine count prevents scheduler storms.

3. Atomic Operations & Distributed Lock

DMap provides atomic primitives (Incr, CAS, Lock) that combine local locking with RPC forwarding to ensure consistency across nodes.

func (dm *DMap) Incr(key string, delta int64) (int64, error) {
    partition := dm.getPartition(key)
    if dm.isOwner(partition) {
        // Fast path: this node owns the partition, apply locally.
        return dm.localIncr(key, delta)
    }
    // Otherwise forward the operation to the owner over RPC.
    return dm.remoteIncr(partition.owner, key, delta)
}

Design philosophy:

Local node handles the operation directly for speed.

Remote nodes forward via RPC.

Replica synchronization guarantees data consistency.

Rebalancing Mechanism: Dynamic Nodes & Data Migration

When a node joins or leaves, the Partition Table must redistribute partitions. Olric’s rebalancing proceeds in three steps.

1. Compute Migration Plan

type MigrationPlan struct {
    PartitionID uint32
    FromNodeID  uint64
    ToNodeID    uint64
}

Uses Jump Hash to calculate new partition ownership.

Generates a minimal migration list.
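Jump consistent hashing (Lamping & Veach) is what makes the migration list minimal: when the bucket count grows from n to n+1, a key either keeps its bucket or moves to the new bucket n, never anywhere else. A minimal Go version of the published algorithm:

```go
package main

import "fmt"

// jumpHash maps key to a bucket in [0, numBuckets).
func jumpHash(key uint64, numBuckets int32) int32 {
	var b, j int64 = -1, 0
	for j < int64(numBuckets) {
		b = j
		key = key*2862933555777941757 + 1
		j = int64(float64(b+1) * (float64(int64(1)<<31) / float64((key>>33)+1)))
	}
	return int32(b)
}

func main() {
	k := uint64(0xDEADBEEF)
	before, after := jumpHash(k, 10), jumpHash(k, 11)
	// Growing from 10 to 11 buckets: the key stays put or moves
	// to the newly added bucket 10 — never to another old bucket.
	fmt.Println(before == after || after == 10)
}
```

This property means roughly 1/n of the keys move when a node is added, which is exactly the minimal amount of data migration possible.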

2. Execute Data Migration

func (r *Rebalancer) migrate(plan MigrationPlan) error {
    // Pull the partition's entries from the old owner,
    // then push them to the new owner.
    entries := r.fetchEntries(plan.FromNodeID, plan.PartitionID)
    return r.sendEntries(plan.ToNodeID, plan.PartitionID, entries)
}

Characteristics:

Pipeline parallel transfer improves efficiency.

Batch sending lowers network latency.

Partition Table is updated after completion to keep consistency.
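Batch sending can be as simple as chunking the entry list before it goes on the wire. The chunk helper below is a hypothetical illustration, not Olric's API:

```go
package main

import "fmt"

type Entry struct {
	Key   []byte
	Value []byte
}

// chunk splits entries into batches of at most size items,
// so each network send carries a bounded payload instead of
// one round trip per entry.
func chunk(entries []Entry, size int) [][]Entry {
	var batches [][]Entry
	for size < len(entries) {
		batches = append(batches, entries[:size])
		entries = entries[size:]
	}
	return append(batches, entries)
}

func main() {
	entries := make([]Entry, 10)
	batches := chunk(entries, 4)
	fmt.Println(len(batches)) // batches of 4, 4, and 2
}
```

Ten entries in batches of four yield three sends instead of ten, trading a little batching latency for far fewer round trips.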

3. Consistency & Fault Handling

Writes are allowed during migration via a write‑barrier or version check.

Gossip protocol broadcasts migration‑completion events.

Node failures trigger automatic fail‑over to backup nodes.
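A version check on the write path might look like the following sketch. The Node type, tableVersion field, and error value are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

var errStaleVersion = errors.New("stale partition table version")

// Node rejects writes tagged with an outdated partition-table
// version; after a migration bumps the version, clients must
// refresh their routing table before their writes are accepted.
type Node struct {
	tableVersion atomic.Uint64
	mu           sync.Mutex
	data         map[string][]byte
}

func (n *Node) Put(key string, value []byte, version uint64) error {
	if version != n.tableVersion.Load() {
		return errStaleVersion
	}
	n.mu.Lock()
	defer n.mu.Unlock()
	n.data[key] = value
	return nil
}

func main() {
	n := &Node{data: make(map[string][]byte)}
	n.tableVersion.Store(7)
	fmt.Println(n.Put("k", []byte("v"), 7)) // accepted
	fmt.Println(n.Put("k", []byte("v"), 6)) // rejected as stale
}
```

The check turns a silent misroute into an explicit, retryable error, which is what lets writes continue safely while partitions are in flight.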

Engineering Takeaways

Shard + local lock: reduces lock contention, boosts concurrency.

Pipeline batch processing: high throughput, low latency.

Goroutine control with channels: prevents scheduler storms.

Versioning & write barrier: ensures data consistency.

Layered decoupling: DMap, Partition Table, Storage Engine each have clear responsibilities.

Dynamic rebalancing: elastic node addition/removal keeps the system available.

Olric is not just a KV engine; it serves as a practical textbook for Go engineering and distributed system design. Reading its source and understanding its concurrency and rebalancing mechanisms equips you with core techniques for modern Go‑based distributed systems.
Tags: distributed-systems, rebalancing
Written by

Code Wrench

Focuses on code debugging, performance optimization, and real-world engineering, sharing efficient development tips and pitfall guides. We break down technical challenges in a down-to-earth style, helping you craft handy tools so every line of code becomes a problem‑solving weapon. 🔧💻
