
How to Master Microservice Routing, Gray Release, and Multi‑Active Disaster Recovery

This article presents a comprehensive guide to microservice architecture evolution, covering routing, multi‑active deployment, gray releases, canary/rolling/blue‑green strategies, traffic shading, rate limiting, and practical implementation steps using cloud‑native gateways, service meshes, and Kubernetes labeling.


Microservice Architecture Evolution

Traditional monolithic applications suffer from tight coupling and difficulty scaling. Service‑Oriented Architecture (SOA) introduced loose coupling via an ESB but required planned downtime for scaling. Modern cloud‑native microservices add DevOps automation (CI/CD), elastic auto‑scaling, and high availability through containers or serverless runtimes.

Testing‑Phase Traffic Routing

When multiple teams develop concurrently, deploying the full service suite for each integration test is wasteful. The goal is to isolate only the changed services while reusing baseline instances.

Key Techniques

Instance labeling: Add metadata to Kubernetes workloads or service-registry entries so that each instance can be identified by environment, feature flag, or SET (Service Execution Territory). A minimal Kubernetes sketch (the selector/template body is illustrative; routing rules match pod labels, so the labels belong on the pod template):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
        env: test    # environment label matched by routing rules
        set: setA    # SET (unit) label for unitized routing
    spec:
      containers:
      - name: order-service
        image: order-service:latest

Traffic shading (coloring): The ingress gateway tags incoming requests (e.g., by user ID, header, or JWT claim) with a color that matches the target environment.

Gateway‑to‑backend routing: The gateway uses the request color and instance labels to forward traffic to the appropriate test instance via label‑based routing rules.

Service‑mesh inter‑service routing: The mesh (e.g., Polaris, Istio) propagates the request color and routes calls between services according to the same label criteria, as sketched below.
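
To make the last three techniques concrete, here is a minimal sketch assuming Istio (the x-traffic-color header, the test1/baseline subset names, and the env labels are all illustrative, not part of any standard; Polaris expresses the same idea with its own rule format). Requests colored test1 are routed to instances labeled env: test1, and uncolored traffic falls through to the baseline:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service
spec:
  hosts:
  - order-service
  http:
  - match:
    - headers:
        x-traffic-color:      # request "color" stamped by the ingress gateway
          exact: test1
    route:
    - destination:
        host: order-service
        subset: test1         # only instances labeled env: test1
  - route:                    # default: uncolored traffic stays on the baseline
    - destination:
        host: order-service
        subset: baseline
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: order-service
spec:
  host: order-service
  subsets:
  - name: baseline
    labels:
      env: baseline
  - name: test1
    labels:
      env: test1

Note that the color header must be carried forward by each service on its outbound calls (Istio does not forward arbitrary request headers automatically); automating that propagation is exactly what the mesh's shading support provides.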

Release‑Phase Strategies

Three rollout patterns are commonly used to shift traffic safely from version V1 to V2.

Canary release: Gradually increase the proportion of instances receiving V2 traffic (e.g., 10% → 30% → 100%); see the weighted-routing sketch after this list.

Rolling release: Upgrade instances in batches; each batch is verified before proceeding to the next.

Blue‑green release: Deploy a full parallel environment (V2) alongside the production environment (V1). After exhaustive testing, switch all traffic at once via the load balancer; rollback is equally fast, by switching back to V1.
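
A minimal sketch of a single canary step, again assuming Istio (names and weights are illustrative; the v1/v2 subsets would be defined in a DestinationRule keyed on a version label, analogous to the env-based subsets shown earlier):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service
spec:
  hosts:
  - order-service
  http:
  - route:
    - destination:
        host: order-service
        subset: v1
      weight: 90              # stable version keeps 90% of traffic
    - destination:
        host: order-service
        subset: v2
      weight: 10              # first canary step: 10% to V2

A rolling release, by contrast, is usually expressed on the Deployment itself:

# inside the Deployment spec from the labeling example above
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # add at most one V2 pod per batch
      maxUnavailable: 0       # never remove a V1 pod before its replacement is ready

Blue‑green needs no routing rule at all: the cutover is a single load-balancer target switch.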

Full‑Link Gray Release

A dedicated “gray” environment mirrors the entire production stack. Instances are labeled gray. The gateway applies dynamic or static shading so that only traffic matching gray criteria reaches the new version, while all other traffic continues to the stable environment.
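
At the ingress, static shading can be as simple as a header-matched route. A minimal sketch in Kong's declarative format (the service names, upstream URLs, and the x-gray header are illustrative):

_format_version: "3.0"
services:
- name: order-service-gray
  url: http://order-service-gray.default.svc.cluster.local
  routes:
  - name: order-gray
    paths:
    - /order
    headers:
      x-gray:                 # only requests carrying x-gray: true match this route
      - "true"
- name: order-service-stable
  url: http://order-service.default.svc.cluster.local
  routes:
  - name: order-stable
    paths:
    - /order

Kong gives precedence to the route with more match criteria, so requests carrying the header reach the gray stack while everything else lands on the stable one. Dynamic shading replaces the fixed header rule with a condition evaluated per request (user-ID ranges, JWT claims, and so on).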

Production‑Phase Practices

Multi‑Active Disaster Recovery

Deploy the same service across multiple Availability Zones (AZs). One AZ hosts the primary (read/write) database; another hosts a replica (read‑only). The gateway distributes traffic proportionally. If an AZ fails, traffic is automatically rerouted to the surviving zone, preserving continuity.

Near‑by Access (Geographic Routing)

Label service instances with geographic regions (e.g., guangzhou, shanghai). The service mesh routes user requests to the nearest region, reducing latency.
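
Both near‑by access and the AZ failover described above map onto a mesh's locality-aware load balancing. A minimal sketch, assuming Istio and illustrative region names: the mesh prefers endpoints co-located with the caller, and outlier detection (required for failover to take effect) ejects an unhealthy locality so traffic shifts to the configured fallback.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: order-service
spec:
  host: order-service
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:
        - from: ap-guangzhou   # primary region
          to: ap-shanghai      # fallback if the primary's endpoints are ejected
    outlierDetection:          # health signal that drives the failover
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s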

Unitized Architecture (SET)

Group services into Service Execution Territories (SETs). Instances are tagged per SET, and routing rules keep traffic within the same SET, providing strong isolation and simplifying fault containment.
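
A sketch of SET affinity, again assuming Istio (SET names are illustrative): the sourceLabels match inspects the calling pod's own set label, so calls from SET A stay on SET A instances, with the setA/setB subsets defined by the same set labels as in the Deployment example earlier.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service
spec:
  hosts:
  - order-service
  http:
  - match:
    - sourceLabels:
        set: setA              # the calling pod's SET label
    route:
    - destination:
        host: order-service
        subset: setA           # stay inside SET A
  - match:
    - sourceLabels:
        set: setB
    route:
    - destination:
        host: order-service
        subset: setB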

Rate Limiting

Ingress‑layer limiting: The gateway enforces per‑second request quotas, dropping excess traffic or queuing it for later processing (see the sketch after this list).

Inter‑service limiting: The mesh shares a global quota among all instances of a service, enabling distributed throttling.
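
A minimal sketch of the ingress layer, using Kong's bundled rate-limiting plugin in declarative config (the service name and numbers are illustrative):

plugins:
- name: rate-limiting
  service: order-service-stable
  config:
    second: 100               # allow at most 100 requests per second on this service
    policy: local             # per-node counters; simplest option

Switching policy from local (per-node counters) to a shared store such as redis makes the quota cluster-wide, which is the same idea the mesh applies between services: every instance draws from one global quota.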

Typical Cloud‑Native Microservice Stack

Requests enter a cloud‑native gateway (Kong‑ or Nginx‑based) that provides load balancing, security, routing, and rate limiting. The gateway forwards traffic to elastic microservice instances managed by Tencent Service Engine (TSE). Core components:

Service registry: Enables service discovery.

Configuration center: Stores runtime configuration (compatible with Apollo, Consul, etc.).

Service‑governance platform: Provides dynamic routing, circuit breaking, observability, and distributed rate limiting (Polaris).

Tencent Service Engine (TSE)

TSE bundles an open‑source‑compatible gateway, registration & configuration center, Polaris‑based service governance, and an elastic microservice runtime that supports serverless‑style scaling.

Gateway: Kong/Nginx based; handles load balancing (CLB), security, and rate limiting.

Config center: Compatible with Apollo, Consul, etc.

Service governance: Polaris, used by large‑scale services such as WeChat Pay and King of Glory.

Elastic microservice: Automatic scaling of instances based on load.

Tags: cloud-native, traffic-routing, multi-active, gray-release, service-mesh, rate-limiting
Written by Tencent Cloud Middleware

Official account of Tencent Cloud Middleware. Focuses on microservices, messaging middleware, and other cloud‑native technology trends, publishing product updates, case studies, and technical insights. Regularly hosts tech salons to share effective solutions.