Unlocking TRPC: How Frontend Engineers Can Master RPC Architecture and Protobuf

This article walks frontend developers through the fundamentals of the TRPC framework, explaining why understanding RPC protocols, the layered architecture, plugin system, multi‑process handling, and Protobuf serialization is essential for building high‑performance, scalable services and improving debugging efficiency.

MoonWebTeam

Introduction

Frontend developers often use TRPC only at a surface level. This guide starts from basic RPC concepts and gradually introduces TRPC’s architecture, source code analysis of the TRPC Node framework, and the core Protobuf protocol.

Why Learn TRPC

Most services in the author’s environment run on TRPC, and Node.js services are hosted on the framework via node‑agent. When backend teams expose only TRPC interfaces, frontend teams must understand TRPC to invoke them correctly.

Benefits

Improved development efficiency: Knowing TRPC removes uncertainty when making remote calls.

Faster problem diagnosis: Understanding the RPC flow helps locate failures quickly.

Learning design patterns: TRPC’s flexible, extensible design offers valuable patterns for future projects.

RPC Protocol Basics

RPC (Remote Procedure Call) lets a program invoke functions on a remote machine as if they were local. The client uses a Stub to encode requests, send them over the network, and decode responses.

// Local call
const result = add(5, 3);
// Remote call via stub
const result = client_stub.add(5, 3);
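The stub idea can be made concrete with a minimal sketch. The names below (`makeStub`, `fakeTransport`) are illustrative, not TRPC APIs: the stub serializes arguments, ships them over a transport, and decodes the reply, so the caller sees an ordinary function.

```typescript
// A minimal client-stub sketch (names are illustrative, not the tRPC API).
type Transport = (payload: string) => Promise<string>;

function makeStub(transport: Transport) {
  return {
    async add(a: number, b: number): Promise<number> {
      // 1. Encode the call (JSON here; real RPC stubs use a binary codec)
      const request = JSON.stringify({ method: 'add', args: [a, b] });
      // 2. Send over the network and wait for the response
      const response = await transport(request);
      // 3. Decode the result
      return JSON.parse(response).result;
    },
  };
}

// A fake in-process "server" standing in for the remote end
const fakeTransport: Transport = async (payload) => {
  const { args } = JSON.parse(payload);
  return JSON.stringify({ result: args[0] + args[1] });
};

const clientStub = makeStub(fakeTransport);
```

With this in place, `clientStub.add(5, 3)` looks exactly like the local call above, even though the arguments round-trip through the transport.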

RPC vs HTTP

| Aspect | RPC | HTTP |
| --- | --- | --- |
| Performance | Low latency, high throughput, binary format | Higher latency, text format (JSON, XML) |
| Complexity | Handles serialization, networking, etc. | Simpler implementation |
| Data format | Binary (Protobuf, Thrift) | Text (JSON, XML) |

TRPC Framework Architecture

TRPC follows a layered design: Application layer, Proxy (service) layer, Service‑governance layer, and Communication layer. The framework is built as a micro‑kernel with plugins.

Application Layer

Defines remote interfaces and converts business requests into RPC calls.

Proxy Layer

The client creates an ObjectProxy that holds a naming plugin and a list of adapter proxies.

export class ObjectProxy {
  private naming: NamingPlugin;
  private adapters: AdapterProxy[] = [];
  constructor(public communicator: Communicator) {}
}

TRPC supports UnaryInvoke (request‑response) and various stream modes.

public async unaryInvoke(func: string, data: Uint8Array, opt = {}): Promise<UnaryContext> {} // one request, one response
public async clientStream(func: string, onData: (data: Uint8Array) => void, opt = {}): Promise<void> {} // client sends a stream
public async serverStream(func: string, opt = {}): Promise<void> {} // server sends a stream
public async bidiStream(func: string, opt = {}): Promise<void> {} // both sides stream
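A hypothetical usage sketch of the unary mode: the method path, option names, and the `proxy` shape below are illustrative assumptions, not the exact TRPC client API.

```typescript
// Sketch of a unary (request-response) call through a proxy object.
// The proxy shape and option names here are assumptions for illustration.
async function callEcho(proxy: {
  unaryInvoke(func: string, data: Uint8Array, opt?: object): Promise<{ body: Uint8Array }>;
}): Promise<string> {
  // Encode the request body (Protobuf in practice; plain UTF-8 here)
  const reqBody = new TextEncoder().encode('hello');
  // One request in, one response out
  const ctx = await proxy.unaryInvoke('/trpc.demo.Greeter/Echo', reqBody, { timeout: 2000 });
  return new TextDecoder().decode(ctx.body);
}
```

The streaming variants follow the same pattern but exchange multiple messages per call instead of a single request/response pair.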

Service‑Governance Layer

Manages service registration, discovery, load balancing, and monitoring via plugins (e.g., Polaris naming).

public async selectAdapterProxy(reqMessage) {
  // Ask the naming plugin (e.g. Polaris) for a concrete service instance
  const { host, port, protocol } = await this.naming.select(reqMessage);
  const endpoint = new Endpoint(protocol || 'tcp', host, port);
  return new AdapterProxy(this, endpoint, transceiverConstructor);
}

Communication Layer

Handles data transmission and protocol encoding. Built‑in support for TCP, UDP, and HTTP/2. Default transport uses a Protobuf‑based TRPC frame.

export class TCPTransceiver extends Transceiver {
  private socket: net.Socket;
  reconnect() { /* create socket, set up data handler */ }
  sendRequest(protoMessage) { const buffer = this.protocol.compose(protoMessage); this.socket.write(buffer); }
}

Each frame consists of a fixed 16‑byte frame header, followed by a protocol head (call metadata) and the message body. Matching encode and decode functions assemble and validate these parts.

export const encode = (frame) => { /* build Buffer with header, head, body */ };
export const decode = (buf) => { /* validate header, extract head and body */ };
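A sketch of what `encode` might look like for the 16-byte fixed header. The offsets and magic value below follow the tRPC default protocol's documented layout, but treat them as assumptions to check against your framework version:

```typescript
// Compose a tRPC-style frame: 16-byte fixed header + protocol head + body.
// Field offsets/magic are assumptions based on the tRPC default protocol spec.
function composeFrame(head: Buffer, body: Buffer): Buffer {
  const header = Buffer.alloc(16);
  header.writeUInt16BE(0x930, 0);                            // magic value
  header.writeUInt8(0, 2);                                   // frame type: 0 = unary
  header.writeUInt8(0, 3);                                   // stream frame type
  header.writeUInt32BE(16 + head.length + body.length, 4);   // total frame size
  header.writeUInt16BE(head.length, 8);                      // protocol head size
  header.writeUInt32BE(0, 10);                               // stream id (0 for unary)
  // bytes 14-15 are reserved
  return Buffer.concat([header, head, body]);
}
```

`decode` reverses this: validate the magic, read the sizes, then slice the head and body out of the buffer.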

Plugin System

TRPC plugins bridge core logic with external services (naming, protocol, tracing, metrics, authentication, transceiver). The old version used a Communicator with hard‑coded plugin calls; the new version adopts a Koa‑style middleware chain, allowing flexible ordering and context sharing.

const client = new Communicator();
client.use('protocol', TrpcClient);
client.use('naming', PolarisNamingPlugin);
// Usage
const authInfo = await this.communicator.authentication?.getAuthInfo(this.objName);
const res = await this.naming.select(request, naming);
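The Koa-style chain can be sketched with a simplified `compose`: each plugin receives a shared context and a `next` callback, so ordering and short-circuiting are fully under the caller's control. This is a behavioral sketch, not the TRPC source:

```typescript
// A minimal Koa-style middleware chain with a shared context object.
type Ctx = Record<string, unknown>;
type Middleware = (ctx: Ctx, next: () => Promise<void>) => Promise<void>;

function compose(middlewares: Middleware[]) {
  return (ctx: Ctx): Promise<void> => {
    const dispatch = (i: number): Promise<void> => {
      if (i >= middlewares.length) return Promise.resolve();
      // Each middleware decides when (and whether) to run the rest of the chain
      return middlewares[i](ctx, () => dispatch(i + 1));
    };
    return dispatch(0);
  };
}

// Example ordering: a "naming" step resolves an endpoint before a "transport" step sends
const run = compose([
  async (ctx, next) => { ctx.endpoint = '127.0.0.1:8080'; await next(); },
  async (ctx, next) => { ctx.sent = ctx.endpoint !== undefined; await next(); },
]);
```

Compared with hard-coded plugin calls, this makes plugin order explicit and lets any plugin attach data (auth info, selected endpoint, trace spans) to the shared context.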

Multi‑Process Model

Node.js is single‑threaded; TRPC uses node‑agent based on the Cluster module to spawn workers and share listening sockets. The master creates a real socket, while workers receive a dummy handle and request connections via IPC. The default scheduling strategy is Round‑Robin.

cluster.fork(); // master spawns worker processes
Server.prototype.listen = function(...args) { listenInCluster(); };
function listenInCluster() {
  if (cluster.isPrimary) { /* master binds the real socket handle */ }
  else { /* worker gets a dummy handle and sends a queryServer IPC message */ }
}

When a client connects, the master distributes the connection to an idle worker.

RoundRobinHandle.prototype.distribute = function(err, handle) {
  const [workerId, worker] = this.free.shift(); // take the next idle worker
  this.handoff(worker, handle);
};
RoundRobinHandle.prototype.handoff = function(worker, handle) {
  const message = { act: 'newconn', key: this.key };
  sendHelper(worker.process, message, handle, (reply) => {
    if (!reply.accepted) this.distribute(0, handle); // worker refused; try another
  });
};
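The round-robin handoff above can be modeled as a simple queue: take the least-recently-used worker, offer it the connection, and re-queue it so workers rotate evenly. This is a behavioral sketch of the scheduling policy, not Node's actual `RoundRobinHandle`:

```typescript
// Behavioral model of round-robin worker scheduling (not Node's implementation).
function roundRobin<T>(workers: T[]) {
  const free = [...workers];
  return {
    next(): T | undefined {
      const w = free.shift();            // take the least-recently-used worker
      if (w !== undefined) free.push(w); // re-queue so connections rotate evenly
      return w;
    },
  };
}
```

In the real implementation a worker can also decline a connection (`reply.accepted === false`), in which case the master re-distributes the handle to another worker.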

Protobuf Protocol

TRPC uses Google Protocol Buffers for serialization. A .proto file defines messages, enums, and services. Fields are identified by numbers, not names, making the binary format compact.

syntax = "proto3";
package trpc.mobileassist.pngmoleserver;
message Test1Req {
  string msg = 1 [(validate.rules).string = {min_len: 1, max_len: 256, tsecstr: true}];
  common.BusinessRequest req = 2;
  repeated int64 nums = 3;
  oneof test { Test1 test1 = 4; Test2 test2 = 5; }
  EnumTest1 enum_test1 = 6;
  enum EnumTest2 { KEY1 = 0; KEY2 = 1; }
  EnumTest2 enum_test2 = 7;
}

Encoding uses a field key ((field_number << 3) | wire_type) followed by the value. Integer values use varint (variable‑length integer) encoding; signed integers can first be ZigZag‑encoded to map negative numbers to small unsigned ones before the varint step. Length‑delimited fields (strings, bytes, nested messages) prefix the payload with its varint length.
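The wire encoding described above can be worked through by hand. The sketch below is illustrative (proto3 semantics); varint emits 7 bits per byte, least-significant group first, with the high bit set on every byte except the last:

```typescript
// Varint: 7 payload bits per byte, LSB group first, high bit = "more bytes follow"
function encodeVarint(n: number): number[] {
  const out: number[] = [];
  do {
    let byte = n % 128;
    n = Math.floor(n / 128);   // shift right 7 bits without 32-bit overflow
    if (n > 0) byte |= 0x80;   // continuation bit
    out.push(byte);
  } while (n > 0);
  return out;
}

// ZigZag maps signed ints to small unsigned ones: 0, -1, 1, -2 → 0, 1, 2, 3
const zigzag = (n: number): number => (n >= 0 ? 2 * n : -2 * n - 1);

// Field key: (field_number << 3) | wire_type; wire type 0 = varint, 2 = length-delimited
const fieldKey = (fieldNumber: number, wireType: number): number =>
  (fieldNumber << 3) | wireType;

// A length-delimited string field: key, varint length, then UTF-8 bytes
function encodeStringField(fieldNumber: number, value: string): number[] {
  const bytes = Array.from(Buffer.from(value, 'utf8'));
  return [fieldKey(fieldNumber, 2), ...encodeVarint(bytes.length), ...bytes];
}
```

For example, `encodeVarint(300)` yields `[0xac, 0x02]`, and `encodeStringField(1, "hi")` yields `[0x0a, 0x02, 0x68, 0x69]` — the key byte `0x0a` being field number 1 with wire type 2, matching the `string msg = 1` field in the .proto above.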

// Example encoding of a simple message (protobufjs‑style API)
const payload = { test1: "test", test2: 2 };
const message = Example.create(payload);
const buffer = Example.encode(message).finish();
console.log(buffer); // Encoded buffer: <Uint8Array ...>

Conclusion

TRPC combines RPC principles, a plugin‑driven micro‑kernel, multi‑process scaling, and efficient Protobuf serialization to provide a flexible, high‑performance framework for both frontend and backend developers. Understanding its architecture, plugin system, and serialization format enables developers to extend, debug, and optimize services effectively.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: microservices, RPC, Node.js, Protobuf, tRPC
Written by

MoonWebTeam

Official account of MoonWebTeam. All members are former front‑end engineers from Tencent, and the account shares valuable team tech insights, reflections, and other information.
