How to Configure Global Rate Limiting with Aeraki Mesh in Istio
This tutorial explains how to set up Aeraki Mesh's global rate limiting for Dubbo and Thrift services in an Istio service mesh, covering the deployment of a rate‑limit server, configuration of rate‑limit rules, enabling limits via MetaRouter, and verifying the behavior with command‑line tools.
Installation of Example Programs
If you have not installed the example programs, follow the quick‑start guide to install Aeraki, Istio, and the sample applications. After installation, two namespaces (meta-dubbo and meta-thrift) appear, each containing a MetaProtocol‑based example: Dubbo in the former, Thrift in the latter.
➜ ~ kubectl get ns | grep meta
meta-dubbo Active 16m
meta-thrift Active 16m

What Is Global Rate Limiting?
Global rate limiting shares a single quota among all service instances. A dedicated rate‑limit server evaluates requests on behalf of every instance. When a request arrives, the sidecar proxy sends a query to the rate‑limit server, which checks the configured rules and returns a decision that the proxy uses to allow or reject the request.
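Under the hood, this check is a gRPC call following Envoy's RateLimitService API, which the demo's rate‑limit server implements. Conceptually, each query carries the configured domain plus descriptor entries the proxy extracts from the request. The sketch below shows the shape of one such check using Envoy's field names; it is illustrative, not captured wire traffic:

```yaml
# Shape of a single rate-limit check (Envoy RateLimitRequest, simplified):
domain: production          # selects a rule set on the rate-limit server
descriptors:
- entries:
  - key: method             # extracted from the incoming request by the proxy
    value: sayHello
hits_addend: 1              # this request consumes one unit of quota
```

The server matches the descriptors against its configured rules and replies with an OK or OVER_LIMIT decision, which the sidecar enforces.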
When to Use Global Rate Limiting
Global rate limiting centralizes decision‑making, so the limit does not depend on the number of instances. However, it adds an extra network hop and can become a bottleneck under heavy traffic, making deployment and management more complex than local limiting.
If the goal is to keep each instance’s load within a reasonable range, local rate limiting is preferred because it enforces limits per sidecar, offering finer‑grained control and easier horizontal scaling with HPA.
If you need a uniform policy across all instances—e.g., limiting request frequency based on user tier—global rate limiting is appropriate. Docker Hub’s tier‑based pull‑rate limits are a real‑world example.
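The scaling difference between the two approaches can be made concrete with a toy calculation (illustrative only, not Aeraki code):

```python
# Toy illustration: with a limit of 5 requests/min, local rate limiting
# admits up to 5/min *per replica*, so total admitted traffic grows with
# the replica count, while global rate limiting shares one counter and
# stays at 5/min regardless of scale.
LIMIT_PER_MIN = 5
REQUESTS_PER_REPLICA = 10   # offered load hitting each replica in one minute
replicas = 3

# Local: every sidecar enforces the limit independently.
local_admitted = sum(min(LIMIT_PER_MIN, REQUESTS_PER_REPLICA) for _ in range(replicas))

# Global: one shared counter serves all replicas.
global_admitted = min(LIMIT_PER_MIN, REQUESTS_PER_REPLICA * replicas)

print(local_admitted, global_admitted)
```

Scaling the deployment from 3 to 30 replicas would multiply the locally admitted total by 10, while the global total would stay fixed, which is exactly why tier‑based policies need the global variant.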
Deploying the Rate‑Limit Server
The demo already includes a rate‑limit server, so no extra deployment is required. The server’s configuration file defines the rules. The following snippet limits the sayHello method to 5 requests per minute:
domain: production
descriptors:
  - key: method
    value: "sayHello"
    rate_limit:
      unit: minute
      requests_per_unit: 5

Related scripts are available at https://github.com/aeraki-mesh/aeraki/tree/master/demo/metaprotocol-thrift/rate-limit-server
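The semantics of this rule, a single quota of five requests per minute shared by all instances, can be sketched as one counter per descriptor that resets at each window boundary. The following Python sketch is for illustration only; the real server implements Envoy's rate‑limit protocol and typically keeps its counters in Redis:

```python
import time
from collections import defaultdict

class GlobalRateLimiter:
    """Toy model of the server-side decision: one counter per descriptor,
    shared by every caller, reset at each fixed window boundary."""

    def __init__(self, requests_per_unit, unit_seconds=60):
        self.limit = requests_per_unit
        self.unit = unit_seconds
        self.counters = defaultdict(lambda: (0, 0.0))  # key -> (count, window_start)

    def should_allow(self, key, now=None):
        now = time.time() if now is None else now
        count, start = self.counters[key]
        if now - start >= self.unit:          # window expired: start a fresh one
            count, start = 0, now
        if count >= self.limit:               # quota exhausted in this window
            self.counters[key] = (count, start)
            return False
        self.counters[key] = (count + 1, start)
        return True

limiter = GlobalRateLimiter(requests_per_unit=5)
# Six checks inside one window: the first five pass, the sixth is denied.
results = [limiter.should_allow("method=sayHello", now=100.0) for _ in range(6)]
```

Because the counter lives in one place, the answer is the same no matter which replica's sidecar asks, which is the defining property of the global approach.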
Enabling Rate Limiting for a Service
Use a MetaRouter resource to activate global rate limiting. The sidecar proxy will query the rate‑limit server for each request and act based on the response. The following MetaRouter limits the sayHello method of the thrift‑sample‑server.meta-thrift.svc.cluster.local service. The domain must match the server configuration, and the match clause ensures only the specified method is sent for rate‑limit evaluation.
kubectl apply -f - <<EOF
apiVersion: metaprotocol.aeraki.io/v1alpha1
kind: MetaRouter
metadata:
  name: test-metaprotocol-thrift-route
  namespace: meta-thrift
spec:
  hosts:
    - thrift-sample-server.meta-thrift.svc.cluster.local
  globalRateLimit:
    domain: production
    match:
      attributes:
        method:
          exact: sayHello
    rateLimitService: outbound|8081||rate-limit-server.meta-thrift.svc.cluster.local
    requestTimeout: 100ms
    denyOnFail: true
    descriptors:
      - property: method
        descriptorKey: method
EOF

Observing Rate Limiting
After applying the configuration, you can observe the limit with the aerakictl command. The client can only successfully invoke the method five times per minute; subsequent attempts are rejected with a rate‑limit error.
➜ ~ aerakictl_app_log client meta-thrift -f --tail 10
Hello Aeraki, response from thrift-sample-server-v1-5c8476684-842l6/172.17.0.40
Hello Aeraki, response from thrift-sample-server-v2-6d5bcc885-hpx7n/172.17.0.41
... (successful responses)
org.apache.thrift.TApplicationException: meta protocol local rate limit: request '6' has been rate limited
... (subsequent requests fail)

Understanding the Underlying Mechanism
In the sidecar’s configuration, Aeraki injects a MetaProtocol Proxy filter into the inbound listener. The MetaRouter rules are translated into a global rate‑limit filter configuration, which the proxy uses to communicate with the rate‑limit server.
You can view the sidecar configuration with:
aerakictl_sidecar_config server-v1 meta-thrift | fx

Relevant excerpt of the inbound listener’s MetaProtocol Proxy configuration:
{
  "name": "envoy.filters.network.meta_protocol_proxy",
  "typed_config": {
    "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
    "type_url": "type.googleapis.com/aeraki.meta_protocol_proxy.v1alpha.MetaProtocolProxy",
    "value": {
      "stat_prefix": "inbound|9090||",
      "application_protocol": "thrift",
      "route_config": {
        "name": "inbound|9090||",
        "routes": [{
          "route": {"cluster": "inbound|9090||"}
        }]
      },
      "codec": {"name": "aeraki.meta_protocol.codec.thrift"},
      "meta_protocol_filters": [
        {
          "name": "aeraki.meta_protocol.filters.ratelimit",
          "config": {
            "@type": "type.googleapis.com/aeraki.meta_protocol_proxy.filters.ratelimit.v1alpha.RateLimit",
            "match": {"metadata": [{"name": "method", "exact_match": "sayHello"}]},
            "domain": "production",
            "timeout": "0.100s",
            "failure_mode_deny": true,
            "rate_limit_service": {
              "grpc_service": {
                "envoy_grpc": {"cluster_name": "outbound|8081||rate-limit-server.meta-thrift.svc.cluster.local"}
              }
            },
            "descriptors": [{"property": "method", "descriptor_key": "method"}]
          }
        },
        {"name": "aeraki.meta_protocol.filters.router"}
      ]
    }
  }
}

This configuration shows how the sidecar forwards matching requests to the rate‑limit server and applies the decision to allow or deny traffic.
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
