Build a Private MCP Gateway with Higress & Nacos on Kubernetes – No Helm, No Internet
This guide details a private MCP gateway architecture using open‑source Higress as the MCP proxy and Nacos as the registry, enabling dynamic tool registration, real‑time Prompt updates, multi‑tenant isolation, and deployment on an air‑gapped Kubernetes cluster without Helm.
Background
In the era of AI assistants, simple question‑answering is insufficient; agents need to invoke existing services securely. The Model Context Protocol (MCP) has become a de facto standard for model‑to‑service communication, and this article presents a private MCP gateway solution built on Higress and Nacos.
Key Challenges
Maintaining session state for high‑availability instances when using SSE communication.
Allowing rapid, dynamic updates of MCP tool Prompts for fast debugging and verification.
Providing tenant‑level isolation and authentication in multi‑tenant cloud‑service scenarios.
Architecture Overview
Higress acts as the MCP Proxy while Nacos serves as the MCP Registry. Together they solve the three challenges above; the authentication details are covered in a later section.
Private Deployment on an Air‑Gapped Kubernetes Cluster
Both Higress and Nacos are cloud‑native applications, so they are deployed in a K8s cluster. Because the production network has no external access and Helm cannot be used, the deployment relies on Docker images and custom Dockerfiles.
FROM higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/all-in-one:latest

The all‑in‑one image runs all Higress components in a single pod, enabling HA via multiple pod replicas. However, the default image expects to download WASM plugins from an OCI registry, which fails in isolated environments.
WASM Plugin Independent Deployment
To avoid external OCI pulls, the plugin server is deployed separately and its HTTP download URL is configured in the Higress Dockerfile.
FROM higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/plugin-server:1.0.0

The plugin server is exposed inside the cluster through a ClusterIP Service:

apiVersion: v1
kind: Service
metadata:
  name: higress-plugin-server
  namespace: higress-system
spec:
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: higress-plugin-server
    higress: higress-plugin-server

The DNS format for services inside the cluster is <service-name>.<namespace>.svc.cluster.local. In non‑K8s environments a VIP or SLB can be used instead.
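As a quick check, an in-cluster address can be assembled from a Service's name and namespace following that format; a minimal sketch (the helper name is illustrative):

```python
def cluster_dns(service: str, namespace: str, domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name <service>.<namespace>.svc.<domain>."""
    return f"{service}.{namespace}.svc.{domain}"

# The plugin-server Service defined above resolves to:
print(cluster_dns("higress-plugin-server", "higress-system"))
# higress-plugin-server.higress-system.svc.cluster.local
```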
Configuring Custom Plugin URLs
ENV HIGRESS_ADMIN_WASM_PLUGIN_CUSTOM_IMAGE_URL_PATTERN=http://[k8s-service]/plugins/${name}/${version}/plugin.wasm
ENV MCP_SERVER_WASM_IMAGE_URL=http://[k8s-service]/plugins/mcp-server/1.0.0/plugin.wasm

After rebuilding the image with these environment variables, Higress listens on ports 8080 and 8443, confirming that the core data‑plane components are operational.
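Before baking the pattern into the image, the `${name}`/`${version}` placeholders can be sanity-checked locally. A sketch assuming the plugin-server Service's in-cluster DNS name stands in for the `[k8s-service]` placeholder:

```python
from string import Template

# Assumed host: the plugin-server Service's in-cluster DNS name, standing in
# for the [k8s-service] placeholder used in the ENV lines above.
PATTERN = ("http://higress-plugin-server.higress-system.svc.cluster.local"
           "/plugins/${name}/${version}/plugin.wasm")

def plugin_url(name: str, version: str) -> str:
    """Expand the custom image URL pattern for a single plugin."""
    return Template(PATTERN).substitute(name=name, version=version)

print(plugin_url("mcp-server", "1.0.0"))
```

The `mcp-server`/`1.0.0` expansion should match the path in the MCP_SERVER_WASM_IMAGE_URL value above.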
Sticky Sessions for SSE
Higress leverages Redis to maintain sticky sessions required by MCP’s SSE communication. The Redis connection is added to the Higress configuration and redeployed.
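The idea can be illustrated with a toy session store; this is a conceptual sketch using a plain dict in place of Redis, not Higress's actual key schema. The replica that accepts the SSE connection records the session's owner in shared state, so any replica handling a later message can resolve it.

```python
# Toy stand-in for Redis: maps session_id -> owning instance. Higress's
# real key layout is internal; this only illustrates the routing idea.
session_store: dict[str, str] = {}

def open_sse_session(session_id: str, instance: str) -> None:
    """The replica accepting the SSE stream records itself as owner."""
    session_store[session_id] = instance

def route_message(session_id: str) -> str:
    """Any replica resolves the owning instance from shared state."""
    return session_store[session_id]

open_sse_session("sess-42", "higress-pod-0")
print(route_message("sess-42"))  # higress-pod-0
```

The configuration below supplies the shared store's connection details.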
mcpServer:
  enable: true
  sse_path_suffix: /sse
  redis:
    address: xxx.redis.zhangbei.rds.aliyuncs.com:6379
    username: ""
    password: "xxx"
    db: 0

Nacos Cluster Deployment
Nacos provides service registration and metadata storage for MCP tools. A three‑node Raft‑based cluster is required for production reliability.
FROM nacos-registry.cn-hangzhou.cr.aliyuncs.com/nacos/nacos-server:latest

Cluster members are defined via the cluster.conf file. To avoid static IP lists, a headless Service and a custom startup script dynamically generate cluster.conf by querying the service DNS.
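In outline, the startup script resolves the headless Service and writes one `IP:port` line per peer. A Python sketch of that logic (the deployment itself uses a shell script parsing `nslookup` output; the regex and sample data here are illustrative):

```python
import re

def cluster_conf_lines(nslookup_output: str, port: int = 8848) -> list[str]:
    """Extract peer IPs from resolver output and emit 'ip:port' lines."""
    ips = re.findall(r"Address:\s*(\d+\.\d+\.\d+\.\d+)", nslookup_output)
    # Sort for a stable cluster.conf across restarts.
    return [f"{ip}:{port}" for ip in sorted(ips)]

# Sample output for a headless Service with three ready pods:
sample = """\
Name:   nacos-headless.mcp-nacos.svc.cluster.local
Address: 10.1.2.4
Name:   nacos-headless.mcp-nacos.svc.cluster.local
Address: 10.1.2.3
Name:   nacos-headless.mcp-nacos.svc.cluster.local
Address: 10.1.2.5
"""
print("\n".join(cluster_conf_lines(sample)))
```

Because the headless Service returns one A record per ready pod, the generated file tracks cluster membership without hard-coded IPs.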
# Example of dynamic cluster.conf generation
HEADLESS_SERVICE_FQDN="nacos-headless.mcp-nacos.svc.cluster.local"
CLUSTER_CONF_FILE="/home/nacos/conf/cluster.conf"
UPDATE_SCRIPT="/home/nacos/bin/update-cluster.sh"
# ... script reads nslookup output and writes IP:port lines ...

The headless Service definition:
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: mcp-nacos
spec:
  clusterIP: None
  ports:
    - name: peer-finder-port
      port: 8848
      targetPort: 8848
  selector:
    app: mcp-nacos

Service Exposure for Higress
Higress needs to communicate with Nacos via gRPC, so a ClusterIP Service exposing ports 8848 (Nacos API) and 9848 (gRPC) is created.
apiVersion: v1
kind: Service
metadata:
  name: pre-oss-mcp-nacos-endpoint
  namespace: aso-oss-mcp-nacos
spec:
  type: ClusterIP
  ports:
    - name: subscribe-port
      port: 8848
      targetPort: 8848
    - name: grpc-port
      port: 9848
      targetPort: 9848
  selector:
    app: nacos

Authentication
Higress offers built‑in authentication plugins, but the solution prefers to let each downstream service handle its own auth. The gateway simply forwards authentication data (e.g., cookies, tokens) to the service, avoiding credential storage in the gateway.
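A conceptual sketch of the pass-through (not a Higress plugin; the header list is illustrative): the gateway copies the client's credential headers onto the upstream request instead of validating or storing them.

```python
# Headers the gateway forwards untouched; validation happens downstream.
FORWARDED = ("authorization", "cookie", "x-api-token")

def forward_auth_headers(inbound: dict[str, str]) -> dict[str, str]:
    """Copy credential headers from the client request to the upstream call."""
    return {k: v for k, v in inbound.items() if k.lower() in FORWARDED}

upstream = forward_auth_headers({
    "Authorization": "Bearer abc123",
    "Cookie": "session=xyz",
    "User-Agent": "cursor",
})
print(sorted(upstream))  # ['Authorization', 'Cookie']
```

Keeping credentials out of the gateway means a gateway compromise leaks no stored secrets, and each tenant's service retains full control over its own auth policy.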
MCP Validation Workflow
Register a service in Nacos and configure its MCP tool metadata.
Configure Higress to use the Nacos registry as the MCP source.
Expose the Higress service address and URI to client tools (e.g., Cursor/Cherry Studio) and invoke the MCP tool.
Example request template for a simple GET tool:
{
  "requestTemplate": {
    "url": "/xxx/list.json",
    "method": "GET",
    "argsToUrlParam": true
  },
  "responseTemplate": {
    "body": "{{.}}"
  }
}

Additional Diagrams
The article includes architecture, logical module, and sequence diagrams illustrating traffic flow, tool isolation via URI routing, and failover handling. (Images retained for reference.)
Alibaba Cloud Developer
Alibaba's official tech channel, featuring all of its technology innovations.