
Integrating PolarisMesh with gRPC-Go for Full-Service Governance

This guide explains how Tencent's open-source PolarisMesh service governance platform can be combined with gRPC-Go to add service discovery, health checking, dynamic routing, circuit breaking, graceful shutdown, and rate limiting, with step-by-step code examples and an overview of the integration architecture.

Tencent Cloud Middleware

Overview

PolarisMesh is an open‑source service discovery and governance platform that provides service addressing, traffic scheduling, fault tolerance, and access control. It can be used unchanged in both Kubernetes and virtual‑machine environments.

Why gRPC‑Go needs additional governance

gRPC‑Go offers high‑performance binary RPC but does not include built‑in locality routing, circuit breaking, graceful shutdown, or global rate limiting, which are required for robust microservice architectures.

Integration Architecture

PolarisMesh extends gRPC‑Go through three plugin types:

Resolver plugin for service discovery via PolarisMesh.

Balancer plugin for dynamic routing and circuit breaking.

ServerInterceptor plugin for service registration, health checking, and graceful shutdown.
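To make the resolver plugin's role concrete, here is a minimal stdlib-only sketch of the idea behind it: a polaris:// target names a logical service, and the resolver asks the registry (the PolarisMesh control plane in the real plugin) for that service's instance addresses. The registry map, the resolve function, and the addresses below are all illustrative assumptions, not the grpc-go-polaris API.

```go
package main

import (
	"fmt"
	"sort"
)

// registry is a stand-in for the PolarisMesh control plane: it maps a
// logical service name to the addresses of its healthy instances.
var registry = map[string][]string{
	"EchoServerGRPC": {"10.0.0.2:8080", "10.0.0.1:8080"},
}

// resolve mimics what a resolver plugin does when it sees a
// polaris://<service>/ target: look the service up and return endpoints.
func resolve(service string) ([]string, error) {
	addrs, ok := registry[service]
	if !ok {
		return nil, fmt.Errorf("service %q not found", service)
	}
	out := append([]string(nil), addrs...)
	sort.Strings(out) // deterministic order for the example
	return out, nil
}

func main() {
	addrs, err := resolve("EchoServerGRPC")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs)
}
```

In the real plugin the resolver also watches for instance changes and pushes updated address lists into the gRPC channel, rather than returning a one-shot slice.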

Architecture diagram: PolarisMesh integration architecture

Client‑side usage

Import the PolarisMesh plugin and dial using the polaris:// scheme.

import (
    "context"
    "log"

    "google.golang.org/grpc"
    polaris "github.com/polarismesh/grpc-go-polaris"
    // pb below refers to the generated protobuf package for the Echo
    // service; its import path is omitted in this excerpt.
)

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// Dial the logical service name: the polaris:// scheme routes name
// resolution through the PolarisMesh resolver plugin, and the default
// service config enables the PolarisMesh load balancer.
conn, err := grpc.DialContext(
    ctx,
    "polaris://EchoServerGRPC/",
    grpc.WithInsecure(),
    grpc.WithDefaultServiceConfig(polaris.LoadBalanceConfig),
)
if err != nil {
    log.Fatal(err)
}
defer conn.Close()

// Normal client call
echoClient := pb.NewEchoServerClient(conn)
echoClient.Echo(ctx, &pb.EchoRequest{Value: value})
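Once the resolver has delivered a set of instances, the balancer plugin decides which one receives each call, typically honoring per-instance weights from the control plane. The following stdlib-only sketch illustrates weighted selection; the instance type, the pick function, and the weights are illustrative assumptions, not the plugin's implementation.

```go
package main

import "fmt"

// instance models a discovered endpoint with a routing weight.
type instance struct {
	addr   string
	weight int
}

// pick selects an instance by walking the cumulative weight distribution
// with a value in [0, totalWeight); the caller supplies the random value,
// which keeps the function deterministic and easy to test.
func pick(instances []instance, r int) string {
	total := 0
	for _, in := range instances {
		total += in.weight
	}
	r = r % total
	for _, in := range instances {
		if r < in.weight {
			return in.addr
		}
		r -= in.weight
	}
	return instances[len(instances)-1].addr
}

func main() {
	backends := []instance{
		{addr: "10.0.0.1:8080", weight: 70},
		{addr: "10.0.0.2:8080", weight: 30},
	}
	fmt.Println(pick(backends, 10)) // lands in the first instance's 0-69 band
	fmt.Println(pick(backends, 85)) // lands in the second instance's 70-99 band
}
```

Circuit breaking composes with this naturally: instances the breaker has tripped are simply excluded from the candidate slice before picking.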

Server‑side usage

Register the gRPC server with PolarisMesh so that the service becomes discoverable and participates in governance features.

import (
    "fmt"
    "log"
    "net"
    "os"
    "os/signal"
    "syscall"

    "google.golang.org/grpc"
    polaris "github.com/polarismesh/grpc-go-polaris"
    // pb below refers to the generated protobuf package for the Echo
    // service; its import path is omitted in this excerpt.
)

srv := grpc.NewServer()
pb.RegisterEchoServerServer(srv, &EchoService{})
address := fmt.Sprintf("0.0.0.0:%d", listenPort)
listen, err := net.Listen("tcp", address)
if err != nil {
    log.Fatalf("failed to listen on %s: %v", address, err)
}

// Register the listener with PolarisMesh so the instance becomes discoverable.
pSrv, err := polaris.Register(srv, listen, polaris.WithServerApplication("EchoServerGRPC"))
if err != nil {
    log.Fatal(err)
}

// Graceful shutdown: deregister from PolarisMesh first so no new traffic
// is routed here, then drain in-flight requests with GracefulStop.
// Subscribing only to SIGINT and SIGTERM avoids reacting to unrelated
// signals (an argument-less signal.Notify would relay every signal).
go func() {
    c := make(chan os.Signal, 1)
    signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
    s := <-c
    log.Printf("received quit signal: %v", s)
    pSrv.Deregister()
    srv.GracefulStop()
}()

if err = srv.Serve(listen); err != nil {
    log.Printf("serve error: %v", err)
}

Quick‑start example

Source code: https://github.com/polarismesh/grpc-go-polaris/tree/main/examples/quickstart

Other framework integrations

grpc-go: https://github.com/polarismesh/grpc-go-polaris

dubbo-go: https://github.com/apache/dubbo-go/tree/master/registry/polaris

go-zero: https://github.com/zeromicro/zero-contrib/tree/main/zrpc/registry/polaris

GoFrame: https://github.com/gogf/polaris

grpc-java-polaris: https://github.com/polarismesh/grpc-java-polaris

spring-cloud-tencent: https://github.com/Tencent/spring-cloud-tencent

Tags: service discovery, Go, service mesh, PolarisMesh, gRPC-Go
Written by

Tencent Cloud Middleware

Official account of Tencent Cloud Middleware. Focuses on microservices, messaging middleware and other cloud‑native technology trends, publishing product updates, case studies, and technical insights. Regularly hosts tech salons to share effective solutions.
