Building a Kubernetes Ingress Gateway with Pingora (Rust)
This article explains how to use Cloudflare's open‑source Pingora Rust framework to implement a Kubernetes ingress gateway, covering objectives, architecture, code examples for an ingress watcher and router control, deployment steps, and custom configuration integration.
Preface
Reverse-proxy gateways are typically built on Nginx, but extending Nginx for custom business needs is problematic, whether by modifying the C source or writing Lua scripts, especially with respect to security and resource efficiency.
Our team previously built a Rust‑based gateway for traffic, permission, and security management, and now we aim to create a K8s gateway using the newly open‑sourced Pingora.
Goals
Implement basic reverse‑proxy features (HTTP/1, HTTP/2, gRPC, WebSocket).
Automatically and smoothly update routes by watching Ingress resources.
Provide a simple, easy‑to‑use middleware design.
Offer engineering capabilities such as monitoring, traffic control, and security.
Pingora Introduction
Pingora is developed by Cloudflare, a leading global network service provider offering premium CDN and DDoS-protection solutions.
Since 2022 Cloudflare has been replacing Nginx with Pingora.
Pingora processes over 10 trillion internet requests daily, delivering new features while using only one‑third of the CPU and memory of traditional proxy infrastructures.
Pingora is a Rust framework to build fast, reliable and programmable networked systems. Pingora is battle‑tested as it has been serving more than 40 million Internet requests per second for more than a few years.
K8s & Ingress Introduction
Kubernetes is the dominant container orchestration platform, and Ingress objects define how a gateway proxies traffic to services.
Tools like Nginx and Istio watch Ingress changes to dynamically adjust routing.
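Conceptually, this watch-and-update loop needs no Kubernetes dependency at all. The sketch below simulates Ingress changes with a plain enum over an mpsc channel; `IngressEvent`, `apply_event`, and the route-table shape are illustrative stand-ins, not the kube-rs API:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Simplified stand-in for a watched Ingress change (not the kube-rs API).
#[derive(Debug)]
enum IngressEvent {
    Apply { host: String, path: String, service: String },
    Delete { host: String, path: String },
}

// (host, path) -> backend service
type Routes = HashMap<(String, String), String>;

// Apply one event to the route table.
fn apply_event(routes: &mut Routes, event: IngressEvent) {
    match event {
        IngressEvent::Apply { host, path, service } => {
            routes.insert((host, path), service);
        }
        IngressEvent::Delete { host, path } => {
            routes.remove(&(host, path));
        }
    }
}

fn main() {
    let (sender, receiver) = mpsc::channel();

    // The "watcher": in a real gateway this thread would stream
    // Ingress changes from the Kubernetes API server.
    thread::spawn(move || {
        sender
            .send(IngressEvent::Apply {
                host: "test.com".into(),
                path: "/api/v1".into(),
                service: "echo:8080".into(),
            })
            .unwrap();
    });

    // The "router": drain events and keep the route table current.
    let mut routes = Routes::new();
    for event in receiver {
        apply_event(&mut routes, event);
    }
    assert_eq!(
        routes.get(&("test.com".into(), "/api/v1".into())),
        Some(&"echo:8080".to_string())
    );
}
```

The channel decouples the watcher from the router, which is the same shape the gateway below uses with an async channel instead of mpsc.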
Architecture Design
Overall onion-style layering of handlers.
Monitoring, event, and security modules are integrated via handlers.
An Ingress watcher dynamically adjusts the routing structure.
A controller is provided for interacting with the gateway.
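The onion layering can be expressed as handlers that wrap one another, each seeing the request on the way in and the response on the way out. The names here (`Handle`, `Logging`, `Upstream`) are illustrative, not Pingora APIs:

```rust
// Each cross-cutting module (monitoring, events, security, ...) implements
// Handle and wraps the layers inside it, onion-style.
trait Handle {
    fn handle(&self, req: &str) -> String;
}

// Innermost layer: stands in for the proxied upstream service.
struct Upstream;
impl Handle for Upstream {
    fn handle(&self, req: &str) -> String {
        format!("response to {req}")
    }
}

// A layer that runs logic before (request inward) and after
// (response outward) the handler it wraps.
struct Logging<H: Handle> {
    inner: H,
}
impl<H: Handle> Handle for Logging<H> {
    fn handle(&self, req: &str) -> String {
        println!("-> {req}");
        let resp = self.inner.handle(req);
        println!("<- {resp}");
        resp
    }
}

fn main() {
    // Two nested layers around the upstream: the request passes through
    // each layer going in, and the response passes back out through them.
    let gateway = Logging { inner: Logging { inner: Upstream } };
    println!("{}", gateway.handle("GET /api/v1"));
}
```

Static generic nesting is one design choice; a production gateway would more likely chain boxed trait objects so layers can be configured at runtime.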
Code Implementation
We will not implement the entire architecture now; we start with an MVP version.
Ingress Watcher
Watch mechanism monitors Ingress changes; code link below.
impl WatchIngress {
    // Start watching
    pub async fn start_watch(&self) -> anyhow::Result<Receiver<IngressEvent>> {
        // Ingress changes are sent into a channel: a decoupled, asynchronous design
        let (sender, receiver) = async_channel::bounded(8);
        // Create the k8s client
        let client = Client::try_default().await?;
        let api: Api<Ingress> = xxx
        ...
        // Create a watcher
        ...
        let mut watch = watcher(api, wc).default_backoff().boxed();
        tokio::spawn(async move {
            while let Some(result) = watch.next().await {
                ... // forward each configuration change in a loop
            }
        });
        Ok(receiver)
    }
}

Router Control
Code link below.
impl HttpProxyControl {
    // Receive events from the channel and process them with the
    // ing_event_to_router function, whose main job is to update
    // the routes according to each event.
    fn ing_event_to_router(ing: IngressEvent, acl: Acl<...>) {
        ...
        // Handle the ingress event
        match ty {
            1 | 2 => { // init | update
                ... // create or update the route
            }
            3 => { // delete
                ... // delete the route
            }
            ...
        }
        // Handle SNI
        for (host, i) in sni.sni {
            ...
        }
        // Update the routes lock-free through the acl pointer
        acl.update(move |_| {
            map
        });
    }
}

Routing Design
Kubernetes Ingress currently defines three path-matching types: Exact, Prefix, and ImplementationSpecific. We initially support Exact and Prefix.
Exact mode uses a simple map lookup.
Prefix mode employs a classic compressed trie structure (code link omitted).
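A compressed trie is one implementation; even a sorted map gives correct longest-prefix matching and shows the contract the router must satisfy. This is a sketch of that contract, not the article's actual structure:

```rust
use std::collections::{BTreeMap, HashMap};

struct Router {
    exact: HashMap<String, String>,   // Exact routes: full-path lookup
    prefix: BTreeMap<String, String>, // Prefix routes, keyed by path prefix
}

impl Router {
    fn route(&self, path: &str) -> Option<&String> {
        // An exact match wins outright.
        if let Some(svc) = self.exact.get(path) {
            return Some(svc);
        }
        // Otherwise the longest matching prefix wins. Every prefix of
        // `path` sorts <= `path`, and among them the longest sorts last,
        // so scanning that range in reverse finds it first.
        self.prefix
            .range(..=path.to_string())
            .rev()
            .find(|(p, _)| path.starts_with(p.as_str()))
            .map(|(_, svc)| svc)
    }
}

fn main() {
    let mut r = Router {
        exact: HashMap::new(),
        prefix: BTreeMap::new(),
    };
    r.exact.insert("/healthz".into(), "probe-svc".into());
    r.prefix.insert("/api".into(), "legacy-svc".into());
    r.prefix.insert("/api/v1".into(), "echo-svc".into());

    assert_eq!(r.route("/healthz"), Some(&"probe-svc".to_string()));
    // Longest matching prefix wins.
    assert_eq!(r.route("/api/v1/greet/hello"), Some(&"echo-svc".to_string()));
    assert_eq!(r.route("/api/v2/greet"), Some(&"legacy-svc".to_string()));
    assert_eq!(r.route("/other"), None);
}
```

A trie replaces the reverse range scan with a single walk down the path segments, which matters once the route table is large.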
Pingora Startup
The gateway needs no load balancing of its own, because Kubernetes Services already balance traffic across pods; it only has to locate the correct Service.
pub fn start_pingora() {
    ...
    let mut my_server = Server::new(Some(Opt::default())).unwrap();
    my_server.bootstrap();
    let mut gateway = http_proxy_service(&my_server.configuration, hpc);
    gateway.add_tcp(format!("0.0.0.0:{}", cfg.port).as_str());
    my_server.add_service(gateway);
    my_server.run_forever();
}

Usage
Deploy a Kubernetes cluster, create a namespace (e.g., qa), and launch a simple echo service for testing.
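Any HTTP echo image works as the test backend. For completeness, here is a std-only stand-in for its response logic; the body format copies the curl example shown later in the article, while the function name and parsing rules are illustrative:

```rust
// Build the echo body from an HTTP request line,
// e.g. "GET /api/v1/greet/hello?content=world HTTP/1.1".
fn echo_body(request_line: &str) -> String {
    let target = request_line.split_whitespace().nth(1).unwrap_or("/");
    // Split the request target into path and query string.
    let (path, query) = target.split_once('?').unwrap_or((target, ""));
    // Echo back the last path segment and the `content` query value.
    let request = path.rsplit('/').next().unwrap_or("");
    let content = query.strip_prefix("content=").unwrap_or("");
    format!("{{\"response\": \"Get [test-server]---> request={request} query={content}\"}}")
}

fn main() {
    println!("{}", echo_body("GET /api/v1/greet/hello?content=world HTTP/1.1"));
    // prints {"response": "Get [test-server]---> request=hello query=world"}
}
```

Serving this from a `std::net::TcpListener` accept loop (or any HTTP framework) is enough to exercise the gateway.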
Deploy Services
Create a ServiceAccount so pods can access K8s resources:

kubectl apply -f ./deploy/role.yaml -n qa

Deploy the pingora-ingress-ctl deployment (image pre-built):

kubectl apply -f ./deploy/role.yaml -n qa

Create an Ingress that routes /api/v1 to the echo service:

kubectl apply -f ./deploy/ingress.yaml -n qa

Expose the gateway externally by creating a Service:

kubectl apply -f ./deploy/pingora-ingress-ctl-src.yaml -n qa

Experience
Send a request and see the echo response:
// request
curl --location --request GET 'http://test.com:30003/api/v1/greet/hello?content=world'
// reply
{"response": "Get [test-server]---> request=hello query=world"}

Changing the route path from v1 to v2 results in a 404 response.
Custom Configuration
Inject custom Pingora configuration via a ConfigMap:
---
version: 1
threads: 2
pid_file: /tmp/load_balancer.pid
error_log: /tmp/load_balancer_err.log
upgrade_sock: /tmp/load_balancer.sock

Apply the ConfigMap:

kubectl apply -f ./deploy/config_map.yaml -n qa

Mount the configuration into the pod by modifying the deployment YAML (excerpt shown):
...
spec:
  ...
  spec:
    containers:
      # Point the startup command at the config file path
      - args:
          - '-c'
          - /config/config.yaml
        command:
          - ./pingora-ingress
        image: wdshihaoren/pingora-ingress:14294998
        ...
        volumeMounts:
          - mountPath: /config
            name: config
            readOnly: true
    dnsPolicy: ClusterFirst
    restartPolicy: Always
    volumes:
      - configMap:
          defaultMode: 420
          name: pingora-ingress-ctl-cm
        name: config

After updating, redeploy for the changes to take effect.