Exploring tRPC-Go: Inside a High‑Performance Pluggable RPC Framework
This article walks through the inner workings of the open‑source tRPC‑Go RPC framework, detailing the client invoke lifecycle, selector reporting, codec serialization, transport handling, server processing, plugin architecture, streaming support, and performance‑oriented design choices for Go backend developers.
tRPC‑Go is Tencent's open‑source RPC framework for Go, emphasizing plug‑in extensibility and high performance. The framework is built around a clear separation of concerns: client, server, filter chain, selector, codec, transport, and a flexible plug‑in system.
Client Invocation Flow
The entry point for a request is client.Invoke. Its implementation performs four main steps:
Prepare Message: Ensure a unique Message object exists in the context to carry metadata, the method name, headers, and payload.
Parse Options: Merge global, client‑instance, and per‑call options (opt...) into a final configuration.
Update Message: Fill the Message with the resolved configuration, such as target service, serialization, and compression settings.
Execute Filter Chain: Run a series of Filter objects (service discovery, load balancing, circuit breaking, monitoring, etc.) before finally invoking the network call via callFunc.
func (c *client) Invoke(ctx context.Context, reqBody interface{}, rspBody interface{}, opt ...Option) (err error) {
ctx, msg := codec.EnsureMessage(ctx)
span, end, ctx := rpcz.NewSpanContext(ctx, "client")
opts, err := c.getOptions(msg, opt...)
defer func() { end.End() }()
if err != nil { return err }
c.updateMsg(msg, opts)
filters := c.fixFilters(opts)
span.SetAttribute(rpcz.TRPCAttributeFilterNames, opts.FilterNames)
return filters.Filter(contextWithOptions(ctx, opts), reqBody, rspBody, callFunc)
}
Selector Filter and Reporting
Within the filter chain, selectorFilter selects a healthy backend node using the Selector interface, which abstracts service discovery and load balancing. After the call, the filter reports the outcome via Selector.Report, allowing dynamic weight adjustments or fault isolation.
type Selector interface {
// Select gets a backend node by service name.
Select(serviceName string, opt ...Option) (*registry.Node, error)
// Report reports request status.
Report(node *registry.Node, cost time.Duration, err error) error
}
The reporting logic distinguishes framework errors (e.g., connection failures, timeouts) from business errors, sending the appropriate metrics to the selector for adaptive load balancing.
Codec Design
The Codec interface separates serialization from transport framing. Implementations encode a request body into a binary buffer and decode responses back into Go structs.
type Codec interface {
// Encode packs the body into a binary buffer.
Encode(message Msg, body []byte) (buffer []byte, err error)
// Decode unpacks the body from a binary buffer.
Decode(message Msg, buffer []byte) (body []byte, err error)
}
Serialization (e.g., Protobuf, JSON) and compression are performed independently in serializeAndCompress, giving developers the freedom to trade space for latency.
func serializeAndCompress(ctx context.Context, msg codec.Msg, reqBody interface{}, opts *Options) ([]byte, error) {
// Marshal
serializationType := msg.SerializationType()
if icodec.IsValidSerializationType(opts.CurrentSerializationType) {
serializationType = opts.CurrentSerializationType
}
reqBodyBuf, err := codec.Marshal(serializationType, reqBody)
if err != nil { return nil, errs.NewFrameError(errs.RetClientEncodeFail, "client codec Marshal: "+err.Error()) }
// Compress
compressType := msg.CompressType()
if icodec.IsValidCompressType(opts.CurrentCompressType) {
compressType = opts.CurrentCompressType
}
if icodec.IsValidCompressType(compressType) && compressType != codec.CompressTypeNoop {
reqBodyBuf, err = codec.Compress(compressType, reqBodyBuf)
if err != nil { return nil, errs.NewFrameError(errs.RetClientEncodeFail, "client codec Compress: "+err.Error()) }
}
return reqBodyBuf, nil
}
Transport Layer
The transport abstracts the underlying network protocol (TCP, UDP, Unix socket). Its RoundTrip method sends the encoded request and returns the raw response.
type clientTransport struct { opts *ClientTransportOptions }
func (c *clientTransport) RoundTrip(ctx context.Context, req []byte, roundTripOpts ...RoundTripOption) (rsp []byte, err error) {
opts := &RoundTripOptions{}
for _, o := range roundTripOpts { o(opts) } // apply per-call options
if opts.EnableMultiplexed { return c.multiplexed(ctx, req, opts) }
switch opts.Network {
case "tcp", "tcp4", "tcp6", "unix":
return c.tcpRoundTrip(ctx, req, opts)
case "udp", "udp4", "udp6":
return c.udpRoundTrip(ctx, req, opts)
default:
return nil, errs.NewFrameError(errs.RetClientConnectFail, fmt.Sprintf("client transport: network %s not support", opts.Network))
}
}
Server Processing
On the server side, the flow mirrors the client: transport receives bytes, Codec.Decode extracts the header and body, the filter chain processes authentication, metrics, etc., the registered Service handler is invoked, and the response is encoded and sent back.
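That mirrored pipeline can be sketched end to end; the types and wiring below are illustrative, not the framework's actual signatures:

```go
package main

import "fmt"

// Handler is the innermost unit: business logic for one request.
type Handler func(req []byte) ([]byte, error)

// Middleware wraps a Handler, mirroring the server-side filter chain.
type Middleware func(Handler) Handler

// handle runs the simplified server steps: decode, filter, invoke, encode.
func handle(frame []byte, decode, encode func([]byte) []byte, filters []Middleware, h Handler) ([]byte, error) {
	req := decode(frame) // 2. Codec.Decode parses the request
	for i := len(filters) - 1; i >= 0; i-- {
		h = filters[i](h) // 3. filters wrap the handler, outermost first
	}
	rsp, err := h(req) // 4. business logic executes
	if err != nil {
		return nil, err
	}
	return encode(rsp), nil // 5. Codec.Encode packs the response
}

func main() {
	id := func(b []byte) []byte { return b }
	echo := func(req []byte) ([]byte, error) { return req, nil }
	out, _ := handle([]byte("ping"), id, id, nil, echo)
	fmt.Println(string(out)) // ping
}
```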
type Server struct { services map[string]Service }
// Simplified handling steps:
// 1. Transport receives data
// 2. Codec.Decode parses request
// 3. Filter chain processes request
// 4. Service handler executes business logic
// 5. Codec.Encode packs response
// 6. Transport sends response
Plug‑in System
tRPC‑Go’s plug‑in architecture is two‑tier (type + name) and supports strong (Depender) and weak (FlexDepender) dependencies. Initialization respects the dependency order: strong → weak → independent.
type Depender interface { DependsOn() []string }
type FlexDepender interface { FlexDependsOn() []string }
Plug‑ins are categorized into independent plug‑ins (codec, compress, serialization, cache), service‑governance plug‑ins (selector/discovery, config, metrics, log, rpcz), and storage plug‑ins (relational DB, NoSQL, MQ, cache). An example FixedNodeSelector plug‑in demonstrates how to implement the selector.Selector interface and register it.
type FixedNodeSelector struct { Nodes map[string]string `yaml:"nodes"` }
func (p *FixedNodeSelector) Type() string { return "selector" }
func (p *FixedNodeSelector) Setup(name string, dec plugin.Decoder) error {
if err := dec.Decode(p); err != nil { return err }
selector.Register(name, p)
return nil
}
func (p *FixedNodeSelector) Select(serviceName string, opts ...selector.Option) (*registry.Node, error) {
if node, ok := p.Nodes[serviceName]; ok {
return &registry.Node{Address: node, ServiceName: serviceName}, nil
}
return nil, fmt.Errorf("unknown service %s", serviceName)
}
func (p *FixedNodeSelector) Report(*registry.Node, time.Duration, error) error { return nil }
func init() { plugin.Register("fixed_selector", &FixedNodeSelector{Nodes: make(map[string]string)}) }
Streaming Support
tRPC‑Go provides client streaming, server streaming, and bidirectional streaming. Flow control uses a window‑size mechanism; if InitWindowSize is zero, flow control is disabled.
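The window mechanism can be sketched with a buffered channel acting as a token bucket; this is a simplification under assumed names, not the framework's newSendControl implementation:

```go
package main

import "fmt"

// sendControl is a minimal window-based flow controller: each in-flight
// frame takes a token, and tokens return when the peer acknowledges.
type sendControl struct {
	window chan struct{}
}

// newWindow returns nil when size is zero, mirroring the
// InitWindowSize == 0 "flow control disabled" case.
func newWindow(size uint32) *sendControl {
	if size == 0 {
		return nil
	}
	return &sendControl{window: make(chan struct{}, size)}
}

// Acquire blocks until a window slot is free; a nil receiver means
// flow control is off and sends proceed unthrottled.
func (c *sendControl) Acquire() {
	if c != nil {
		c.window <- struct{}{}
	}
}

// Release frees a slot after the receiver consumes a frame.
func (c *sendControl) Release() {
	if c != nil {
		<-c.window
	}
}

func main() {
	w := newWindow(2)
	w.Acquire()
	w.Acquire()
	fmt.Println(len(w.window)) // 2
}
```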
func (s *serverStream) setSendControl(msg codec.Msg) (uint32, error) {
initMeta, ok := msg.StreamFrame().(*trpcpb.TrpcStreamInitMeta)
if !ok { return 0, errors.New(streamFrameInvalid) }
if initMeta.InitWindowSize == 0 { s.rControl, s.sControl = nil, nil; return initMeta.InitWindowSize, nil }
s.sControl = newSendControl(initMeta.InitWindowSize, s.done)
return initMeta.InitWindowSize, nil
}Architectural Highlights
Layered Decoupling: Client/Server, Filter, Selector, Codec, and Transport each have clear interfaces, enabling independent replacement.
Plug‑in Extensibility: All cross‑cutting concerns (service discovery, logging, metrics, configuration) are plug‑ins, allowing extensive customization.
Performance Awareness: Connection pooling, multiplexed transports, configurable serialization/compression, and fine‑grained timeout controls reduce latency and increase throughput.
Observability: Built‑in RPCZ tracing records stages such as Marshal, Compress, Encode, and Decode for real‑time diagnostics.
Flexible Configuration: Three‑level priority (dynamic > instance > global) and hierarchical timeout settings adapt to diverse deployment scenarios.
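The three‑level priority can be sketched as a layered merge; the Options shape below is illustrative, not the framework's actual type:

```go
package main

import "fmt"

// Options carries one illustrative setting; zero means "not set".
type Options struct {
	TimeoutMs int
}

// resolve merges layers from lowest to highest priority, so a non-zero
// per-call (dynamic) value overrides instance, which overrides global.
func resolve(global, instance, dynamic Options) Options {
	out := global
	if instance.TimeoutMs != 0 {
		out.TimeoutMs = instance.TimeoutMs
	}
	if dynamic.TimeoutMs != 0 {
		out.TimeoutMs = dynamic.TimeoutMs
	}
	return out
}

func main() {
	fmt.Println(resolve(Options{1000}, Options{500}, Options{}).TimeoutMs) // 500
}
```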
By studying tRPC‑Go’s source, developers can learn advanced Go interface design, dependency inversion, error handling, concurrency control, and how to build a highly modular, observable, and performant microservice framework.
Code Wrench
Focuses on code debugging, performance optimization, and real-world engineering, sharing efficient development tips and pitfall guides. We break down technical challenges in a down-to-earth style, helping you craft handy tools so every line of code becomes a problem‑solving weapon. 🔧💻