Unlocking Olric’s High‑Performance Network Protocol and RPC Mechanism
This article takes a deep dive into Olric’s network communication architecture and RPC mechanism: its layered transport design, request/response structures, pipeline and batch processing, client‑to‑cluster interactions, and data migration and rebalancing, with Go code examples illustrating high‑throughput, safe distributed operations.
Overview of Olric Network Communication
Olric’s network layer handles three main responsibilities:

Cluster state synchronization (Gossip + partition‑table updates)
DMap request forwarding (Put/Get/Delete)
Data migration and rebalancing
The core source‑code layout reflects a clear separation of concerns:
internal/transport/ → network transport and RPC
internal/cluster/ → node state management & Gossip
internal/rebalancer/ → data migration pipeline
pkg/client/ → Go client request wrapper

Design Philosophy
Logical clarity: Transport focuses on networking, Cluster manages node state.
High performance: Pipeline + batch + asynchronous processing.
Extensibility: Client and server share a common communication protocol.
RPC Request and Response Mechanism
Olric’s RPC design follows Go idioms, supporting both synchronous and asynchronous calls, batch transmission with pipeline, and safe high‑concurrency handling.
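As a minimal sketch of the synchronous/asynchronous split (the `send` function and the stand‑in types here are assumptions for illustration, not Olric's actual API), an asynchronous call can wrap a blocking send in a goroutine and hand back a result channel:

```go
package main

import (
	"fmt"
	"time"
)

type Request struct{ Cmd, Key string }
type Response struct {
	Status int
	Value  []byte
}

// send is a stand-in for a blocking network round trip.
func send(r *Request) *Response {
	time.Sleep(10 * time.Millisecond) // simulate network latency
	return &Response{Status: 200, Value: []byte("ok")}
}

// sendAsync wraps the synchronous call and returns a channel
// that will receive the response when it arrives.
func sendAsync(r *Request) <-chan *Response {
	ch := make(chan *Response, 1) // buffered: the goroutine never blocks on delivery
	go func() { ch <- send(r) }()
	return ch
}

func main() {
	future := sendAsync(&Request{Cmd: "GET", Key: "key1"})
	// The caller is free to do other work here, then block for the result.
	resp := <-future
	fmt.Println(resp.Status, string(resp.Value))
}
```

The buffered channel of size one is a common Go "future" idiom: the sending goroutine can complete even if the caller never reads the result.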
1. RPC Request Structure
type Request struct {
	Cmd      string
	DMap     string
	Key      []byte
	Value    []byte
	TTL      int64
	Metadata map[string]string
}

2. RPC Response Structure
type Response struct {
	Status  int
	Value   []byte
	Version uint64
	Error   string
}

Pipeline and Batch Processing
To achieve high throughput, the Transport module batches requests and processes them in parallel using a pipeline pattern.
func (t *Transport) sendBatch(requests []*Request) []*Response {
	var wg sync.WaitGroup
	responses := make([]*Response, len(requests))
	for i, req := range requests {
		wg.Add(1)
		go func(i int, r *Request) {
			defer wg.Done()
			// Each request is sent concurrently; the index keeps
			// every response aligned with its request.
			responses[i] = t.send(r)
		}(i, req)
	}
	wg.Wait()
	return responses
}

Advantages of this approach include automatic parallelism for high‑concurrency requests, reduced aggregate network latency, and lightweight goroutine‑based coordination.
Client‑to‑Cluster Communication
A typical client usage example looks like this:
client, err := olric.NewClient(olric.Config{
	Addrs: []string{"127.0.0.1:3320"},
})
if err != nil {
	log.Fatal(err)
}
val, err := client.DMap("example").Get("key1")
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(val))

The workflow for a request is:
The client hashes the key to determine the target node.
If the target node is local, the DMap is accessed directly.
Otherwise the client initiates an RPC via the Transport layer.
The remote node executes the storage engine operation and returns a Response.
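Step 1 above can be sketched as follows. The hash function and the partition table are illustrative stand‑ins (Olric routes via its internal partition table; the 271‑partition figure matches Olric's documented default but is used here only as an example):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const partitionCount = 271 // illustrative; matches Olric's documented default

// partitionID hashes a key onto the fixed partition space.
func partitionID(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64() % partitionCount
}

func main() {
	// Hypothetical partition table: partition ID → owner address.
	owners := make(map[uint64]string)
	nodes := []string{"127.0.0.1:3320", "127.0.0.1:3322"}
	for p := uint64(0); p < partitionCount; p++ {
		owners[p] = nodes[p%uint64(len(nodes))]
	}
	pid := partitionID("key1")
	fmt.Printf("key1 → partition %d, owner %s\n", pid, owners[pid])
}
```

Because the partition space is fixed, node joins and leaves only reassign partition ownership; the key‑to‑partition mapping itself never changes.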
Key design highlights are a simple client API that hides distribution complexity and automatic detection of node joins/leaves through partition‑table updates.
Data Migration and Rebalancing Communication
When nodes join or leave, partitions must be migrated. Olric uses a combination of pipeline batch sending, asynchronous RPC confirmation, and Gossip broadcasts to keep the cluster consistent.
Example Migration RPC
func (t *Transport) migratePartition(partitionID uint32, toNode string, entries []*Entry) error {
	req := &Request{
		Cmd:   "MIGRATE",
		Key:   []byte(fmt.Sprintf("%d", partitionID)),
		Value: serializeEntries(entries),
	}
	resp := t.send(req, toNode)
	// Response.Error is a string on the wire; convert it back to a Go error.
	if resp.Error != "" {
		return errors.New(resp.Error)
	}
	return nil
}

Engineering considerations include asynchronous migration with pipeline to avoid blocking, write barriers and version numbers to guarantee consistency during transfer, and Gossip notifications to update the partition table across the cluster.
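A hedged sketch of the serializeEntries helper used above: the Entry type and the gob encoding are assumptions chosen for a self‑contained example, not Olric's real wire format, but they show why batching a whole partition chunk into one payload keeps migration down to a single RPC per batch:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// Entry is a hypothetical key/value record carried during migration.
type Entry struct {
	Key   string
	Value []byte
	TTL   int64
}

// serializeEntries flattens a batch of entries into one payload
// so a whole partition chunk travels in a single RPC.
func serializeEntries(entries []*Entry) []byte {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(entries); err != nil {
		log.Fatal(err)
	}
	return buf.Bytes()
}

// deserializeEntries is the receiving node's inverse operation.
func deserializeEntries(payload []byte) []*Entry {
	var entries []*Entry
	if err := gob.NewDecoder(bytes.NewReader(payload)).Decode(&entries); err != nil {
		log.Fatal(err)
	}
	return entries
}

func main() {
	in := []*Entry{{Key: "k1", Value: []byte("v1"), TTL: 60}}
	out := deserializeEntries(serializeEntries(in))
	fmt.Println(out[0].Key, string(out[0].Value))
}
```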
Summary of Olric’s Network Design
Clear layering: Transport handles networking, Cluster manages node state.
High‑concurrency design: pipeline + goroutine + batch processing.
Distributed consistency: version numbers, write barriers, and Gossip.
Extensible protocol: Metadata field allows future extensions.
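As an illustration of that extension point, a caller could piggyback tracing information on the Metadata field of the Request struct shown earlier. The keys here are hypothetical, not part of any defined protocol:

```go
package main

import "fmt"

// Request mirrors the struct from the RPC section above.
type Request struct {
	Cmd      string
	DMap     string
	Key      []byte
	Value    []byte
	TTL      int64
	Metadata map[string]string
}

func main() {
	req := &Request{
		Cmd:  "PUT",
		DMap: "example",
		Key:  []byte("key1"),
		// Hypothetical extension: trace context rides along without
		// touching the protocol's fixed fields.
		Metadata: map[string]string{
			"trace-id":   "abc123",
			"client-ver": "0.5.0",
		},
	}
	fmt.Println(req.Metadata["trace-id"])
}
```

Because unknown metadata keys can simply be ignored by older nodes, this kind of field lets the protocol evolve without breaking wire compatibility.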
By studying the source code, readers can learn how to implement high‑performance distributed RPC in Go, decouple client logic from cluster logic, and balance throughput with consistency.