Multi‑Cluster Deployment and Traffic‑Lane Solution with Alibaba Cloud Service Mesh ASM
This guide explains how to use Alibaba Cloud Service Mesh (ASM) to create isolated, on‑demand environments for cloud‑native microservices across multiple ACK clusters, leveraging traffic‑lane (permissive mode) and OpenTelemetry automatic instrumentation to achieve efficient development, testing, and progressive gray‑release workflows while reducing resource consumption.
In a rapidly evolving cloud-native microservice landscape, provisioning a dedicated development and testing environment for every service across agile release cycles is a major cost and operational burden. The solution below, based on Alibaba Cloud Service Mesh (ASM), combines multi-cluster deployment with traffic lanes in permissive mode to provide isolated, on-demand environments that share a single stable baseline, reducing cost and increasing efficiency.
Prerequisites
ASM Enterprise or Premium instance (v1.21.6.54+).
Two ACK clusters added to the same ASM instance (one for production, one for development).
Ingress gateways ingressgateway and ingressgateway-dev created in both clusters.
Gateway rules (Gateway CR) for the two gateways.
All the above can be created following the referenced Alibaba Cloud documentation links.
Step 1 – Deploy OpenTelemetry Operator
kubectl create namespace opentelemetry-operator-system
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install \
--namespace=opentelemetry-operator-system \
--version=0.46.0 \
--set admissionWebhooks.certManager.enabled=false \
--set admissionWebhooks.autoGenerateCert=true \
--set manager.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-operator" \
--set manager.image.tag="0.92.1" \
--set kubeRBACProxy.image.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/kube-rbac-proxy" \
--set kubeRBACProxy.image.tag="v0.13.1" \
--set manager.collectorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-collector" \
--set manager.collectorImage.tag="0.97.0" \
--set manager.opampBridgeImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/operator-opamp-bridge" \
--set manager.opampBridgeImage.tag="0.97.0" \
--set manager.targetAllocatorImage.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/target-allocator" \
--set manager.targetAllocatorImage.tag="0.97.0" \
--set manager.autoInstrumentationImage.java.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-java" \
--set manager.autoInstrumentationImage.java.tag="1.32.1" \
--set manager.autoInstrumentationImage.nodejs.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-nodejs" \
--set manager.autoInstrumentationImage.nodejs.tag="0.49.1" \
--set manager.autoInstrumentationImage.python.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-python" \
--set manager.autoInstrumentationImage.python.tag="0.44b0" \
--set manager.autoInstrumentationImage.dotnet.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/autoinstrumentation-dotnet" \
--set manager.autoInstrumentationImage.dotnet.tag="1.2.0" \
--set manager.autoInstrumentationImage.go.repository="registry-cn-hangzhou.ack.aliyuncs.com/acs/opentelemetry-go-instrumentation" \
--set manager.autoInstrumentationImage.go.tag="v0.10.1.alpha-2-aliyun" \
  opentelemetry-operator open-telemetry/opentelemetry-operator
Verify that the operator is running:
kubectl get pod -n opentelemetry-operator-system
The expected output shows the operator pod in the Running state.
Step 2 – Enable automatic instrumentation
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: demo-instrumentation
spec:
  propagators:
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "1"
kubectl apply -f instrumentation.yaml
With this resource in place, the operator injects the OpenTelemetry Java auto-instrumentation agent into any pod annotated with instrumentation.opentelemetry.io/inject-java: "true", enabling baggage propagation without modifying business code.
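Conceptually, what the injected agent does for traffic lanes is simple: it copies the W3C baggage (and trace-context) headers from each inbound request onto every outbound call, so the lane tag survives across hops. A minimal Python sketch of that behavior (the header values are illustrative; this is not the agent's actual code):

```python
# Sketch of W3C baggage propagation, as performed automatically by the
# OpenTelemetry auto-instrumentation agent injected in Step 2. Illustrative only.

def outbound_headers(inbound_headers):
    """Copy trace-context headers from an inbound request to an upstream call."""
    propagated = {}
    for key in ("baggage", "traceparent", "tracestate"):
        if key in inbound_headers:
            propagated[key] = inbound_headers[key]
    return propagated

# A request entering the mesh carries the lane tag in baggage:
inbound = {"baggage": "version=v1", "accept": "application/json"}
print(outbound_headers(inbound))  # {'baggage': 'version=v1'}
```

Because the agent does this transparently, the business containers need no code changes to keep the lane tag intact end to end.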
Step 3 – Deploy the initial stable version (v1)
apiVersion: v1
kind: Service
metadata:
  name: mocka
  labels:
    app: mocka
    service: mocka
spec:
  ports:
    - port: 8000
      name: http
  selector:
    app: mocka
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mocka-v1
  labels:
    app: mocka
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mocka
      version: v1
      ASM_TRAFFIC_TAG: v1
  template:
    metadata:
      labels:
        app: mocka
        version: v1
        ASM_TRAFFIC_TAG: v1
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
        - name: default
          image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
          imagePullPolicy: IfNotPresent
          env:
            - name: version
              value: v1
            - name: app
              value: mocka
            - name: upstream_url
              value: "http://mockb:8000/"
          ports:
            - containerPort: 8000
---
# (similar Service and Deployment definitions for mockb and mockc)
kubectl apply -f mock.yaml
All pods receive the OpenTelemetry Java agent and carry the version: v1 label.
Step 4 – Define Swim‑Lane Group and Swim‑Lane for v1
# Swim-Lane Group
apiVersion: istio.alibabacloud.com/v1
kind: ASMSwimLaneGroup
metadata:
  name: mock
spec:
  ingress:
    gateway:
      name: ingressgateway
      namespace: istio-system
      type: ASM
  isPermissive: true
  permissiveModeConfiguration:
    routeHeader: version
    serviceLevelFallback:
      default/mocka: v1
      default/mockb: v1
      default/mockc: v1
    traceHeader: baggage
  services:
    - cluster:
        id:
      name: mocka
      namespace: default
    - cluster:
        id:
      name: mockb
      namespace: default
    - cluster:
        id:
      name: mockc
      namespace: default
---
# Swim-Lane for v1
apiVersion: istio.alibabacloud.com/v1
kind: ASMSwimLane
metadata:
  labels:
    swimlane-group: mock
  name: v1
spec:
  labelSelector:
    version: v1
  services:
    - name: mocka
      namespace: default
    - name: mockb
      namespace: default
    - name: mockc
      namespace: default
kubectl apply -f swimlane-v1.yaml
Step 5 – Expose the application via a VirtualService
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: swimlane-ingress-vs
  namespace: istio-system
spec:
  gateways:
    - istio-system/ingressgateway
  hosts:
    - '*'
  http:
    - route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: v1
          headers:
            request:
              set:
                version: v1
kubectl apply -f ingress-vs.yaml
A request to the gateway now returns a call chain like mocka(v1) → mockb(v1) → mockc(v1).
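In permissive mode, each hop is routed by the version header: if the target service has a workload in that lane, the request stays in the lane; otherwise it falls back to the lane configured in serviceLevelFallback. A rough Python model of that per-hop decision (lane availability and the fallback map mirror the YAML in Steps 3 and 4; the function itself is an illustration, not ASM's actual algorithm):

```python
# Illustrative model of permissive-mode lane selection; not ASM's real code.
# Lane availability and serviceLevelFallback mirror the manifests above.
LANES = {"mocka": {"v1"}, "mockb": {"v1"}, "mockc": {"v1"}}
FALLBACK = {"mocka": "v1", "mockb": "v1", "mockc": "v1"}

def select_lane(service, route_header):
    """Honor the version header when the service has a workload in that lane,
    otherwise fall back to the service's configured baseline lane."""
    if route_header in LANES[service]:
        return route_header
    return FALLBACK[service]

chain = [select_lane(s, "v1") for s in ("mocka", "mockb", "mockc")]
print(" -> ".join(chain))  # v1 -> v1 -> v1
```

This fallback is what later lets a dev lane or a canary lane contain only the changed services while every other hop transparently uses the baseline.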
Step 6 – Create Development Swim‑Lanes (Alice and Caros)
# Alice's dev deployment and swim-lane
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mocka-dev-alice
  labels:
    app: mocka
    version: dev-alice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mocka
      version: dev-alice
  template:
    metadata:
      labels:
        app: mocka
        version: dev-alice
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
        - name: default
          image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
          env:
            - name: version
              value: dev-alice
            - name: app
              value: mocka
            - name: upstream_url
              value: "http://mockb:8000/"
          ports:
            - containerPort: 8000
---
apiVersion: istio.alibabacloud.com/v1
kind: ASMSwimLane
metadata:
  labels:
    swimlane-group: mock
  name: dev-alice
spec:
  labelSelector:
    version: dev-alice
  services:
    - name: mocka
      namespace: default
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: swimlane-ingress-vs-alice
  namespace: istio-system
spec:
  gateways:
    - istio-system/ingressgateway-dev
  hosts:
    - '*'
  http:
    - match:
        - headers:
            alice-dev:
              exact: "true"
      name: route-alice
      route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: dev-alice
          headers:
            request:
              set:
                version: dev-alice
kubectl --kubeconfig ~/.kube/config2 apply -f alice-dev.yaml
Similarly, Caros creates a dev-caros swim-lane and VirtualService, enabling isolated testing via the request headers alice-dev: true or caros-dev: true.
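The dev gateway evaluates its routing rules in order and takes the first rule whose headers match exactly; requests without a developer header match no rule. A small Python sketch of that first-match evaluation (the rule and header names come from the YAML above; the evaluator itself is illustrative, not Istio's implementation):

```python
# Illustrative first-match evaluation of the dev-gateway routing rules.
RULES = [
    {"match": {"alice-dev": "true"}, "subset": "dev-alice"},
    {"match": {"caros-dev": "true"}, "subset": "dev-caros"},
]

def route(headers):
    """Return the destination subset of the first rule whose headers all match exactly."""
    for rule in RULES:
        if all(headers.get(k) == v for k, v in rule["match"].items()):
            return rule["subset"]
    return None  # no dev rule matched

print(route({"alice-dev": "true"}))  # dev-alice
print(route({"caros-dev": "true"}))  # dev-caros
print(route({}))                     # None
```

Each developer therefore only needs to add one request header to steer traffic into their own lane, while the baseline serves every hop their lane does not override.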
Step 7 – Gray‑Release v2 (partial update of mocka and mockc)
# Deploy v2 workloads
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mocka-v2
  labels:
    app: mocka
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mocka
      version: v2
      ASM_TRAFFIC_TAG: v2
  template:
    metadata:
      labels:
        app: mocka
        version: v2
        ASM_TRAFFIC_TAG: v2
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
        - name: default
          image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
          env:
            - name: version
              value: v2
            - name: app
              value: mocka
            - name: upstream_url
              value: "http://mockb:8000/"
          ports:
            - containerPort: 8000
---
# (similar Deployment definition for mockc-v2)
kubectl apply -f mock-v2.yaml
# v2 swim-lane (only mocka and mockc)
apiVersion: istio.alibabacloud.com/v1
kind: ASMSwimLane
metadata:
  labels:
    swimlane-group: mock
  name: v2
spec:
  labelSelector:
    version: v2
  services:
    - name: mocka
      namespace: default
    - name: mockc
      namespace: default
kubectl apply -f swimlane-v2.yaml
Update the VirtualService to split traffic 80% to v1 and 20% to v2:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: swimlane-ingress-vs
  namespace: istio-system
spec:
  gateways:
    - istio-system/ingressgateway
  hosts:
    - '*'
  http:
    - route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: v1
          weight: 80
          headers:
            request:
              set:
                version: v1
        - destination:
            host: mocka.default.svc.cluster.local
            subset: v2
          weight: 20
          headers:
            request:
              set:
                version: v2
Running a loop of curl calls against the gateway shows both v1 → v1 → v1 and v2 → v1 → v2 call chains.
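The split is decided per request, so the observed ratio only converges to the configured weights over many calls. A quick Python simulation of weight-proportional subset selection (illustrative only; Envoy's actual load-balancing implementation differs):

```python
import random

# Illustrative simulation of the 80/20 weighted route; not Envoy's algorithm.
WEIGHTS = [("v1", 80), ("v2", 20)]

def pick_subset(rng):
    """Choose a subset with probability proportional to its weight."""
    roll = rng.uniform(0, sum(w for _, w in WEIGHTS))
    for subset, weight in WEIGHTS:
        if roll < weight:
            return subset
        roll -= weight
    return WEIGHTS[-1][0]  # guard against float edge cases

rng = random.Random(0)
counts = {"v1": 0, "v2": 0}
for _ in range(10_000):
    counts[pick_subset(rng)] += 1
print(counts)  # close to an 80/20 split over 10,000 requests
```

This is why a short curl loop may not show exactly 8-in-10 v1 responses: individual runs fluctuate around the configured weights.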
Step 8 – Promote v2 to baseline and retire v1
# Update the swim-lane group baseline
apiVersion: istio.alibabacloud.com/v1
kind: ASMSwimLaneGroup
metadata:
  name: mock
spec:
  ingress:
    gateway:
      name: ingressgateway
      namespace: istio-system
      type: ASM
  isPermissive: true
  permissiveModeConfiguration:
    routeHeader: version
    serviceLevelFallback:
      default/mocka: v2
      default/mockb: v1
      default/mockc: v2
    traceHeader: baggage
  services:
    - cluster:
        id:
      name: mocka
      namespace: default
    - cluster:
        id:
      name: mockb
      namespace: default
    - cluster:
        id:
      name: mockc
      namespace: default
---
# Reduce the v1 swim-lane to only mockb
apiVersion: istio.alibabacloud.com/v1
kind: ASMSwimLane
metadata:
  labels:
    swimlane-group: mock
  name: v1
spec:
  labelSelector:
    version: v1
  services:
    - name: mockb
      namespace: default
kubectl apply -f swimlane-v1.yaml
After the baseline change, all production traffic follows v2 → v1 → v2. The v1 deployments of mocka and mockc can now be deleted; only mockb still runs v1.
Step 9 – Iterate to v3 (only mockb changes)
# Deploy mockb-v3
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mockb-v3
  labels:
    app: mockb
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mockb
      version: v3
      ASM_TRAFFIC_TAG: v3
  template:
    metadata:
      labels:
        app: mockb
        version: v3
        ASM_TRAFFIC_TAG: v3
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "default"
    spec:
      containers:
        - name: default
          image: registry-cn-hangzhou.ack.aliyuncs.com/acs/asm-mock:v0.1-java
          env:
            - name: version
              value: v3
            - name: app
              value: mockb
            - name: upstream_url
              value: "http://mockc:8000/"
          ports:
            - containerPort: 8000
kubectl apply -f mock-v3.yaml
# v3 swim-lane (only mockb)
apiVersion: istio.alibabacloud.com/v1
kind: ASMSwimLane
metadata:
  labels:
    swimlane-group: mock
  name: v3
spec:
  labelSelector:
    version: v3
  services:
    - name: mockb
      namespace: default
kubectl apply -f swimlane-v3.yaml
Modify the VirtualService to route all traffic to the new baseline (v2 for mocka and mockc, v3 for mockb):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: swimlane-ingress-vs
  namespace: istio-system
spec:
  gateways:
    - istio-system/ingressgateway
  hosts:
    - '*'
  http:
    - route:
        - destination:
            host: mocka.default.svc.cluster.local
            subset: v2
          headers:
            request:
              set:
                version: v3
After updating the swim-lane group baseline (mockb → v3), the production call chain becomes v2 → v3 → v2, and the remaining v1 resources can be removed.
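Note that the gateway sets version: v3 even though mocka and mockc have no v3 workloads: permissive mode honors the header only where a matching lane exists and falls back to the baseline elsewhere. A self-contained Python model of the final state (lane availability and fallbacks mirror the Step 8/9 manifests; the function is an illustration, not ASM's code):

```python
# Illustrative model of the final permissive-mode state: only mockb has a v3
# lane; mocka and mockc fall back to v2 per the updated serviceLevelFallback.
LANES = {"mocka": {"v2"}, "mockb": {"v2", "v3"}, "mockc": {"v2"}}
FALLBACK = {"mocka": "v2", "mockb": "v3", "mockc": "v2"}

def select_lane(service, route_header):
    """Use the lane named by the version header if the service has it, else the fallback."""
    if route_header in LANES[service]:
        return route_header
    return FALLBACK[service]

# A request tagged version: v3 at the ingress gateway traverses:
chain = [select_lane(s, "v3") for s in ("mocka", "mockb", "mockc")]
print(" -> ".join(chain))  # v2 -> v3 -> v2
```

Each iteration therefore only deploys the services that actually changed, while the header plus fallback mechanism stitches the rest of the chain together.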
Conclusion
The article demonstrates a complete end‑to‑end workflow for cloud‑native microservice applications using Alibaba Cloud Service Mesh ASM: multi‑cluster management, traffic‑lane isolation, automatic OpenTelemetry instrumentation, progressive gray‑release, and clean baseline promotion, all driven by declarative YAML and without code changes.