Why Cloudflare Replaced Nginx with Pingora: Inside Its High‑Performance Rust Proxy
Cloudflare built Pingora, a Rust‑based HTTP proxy that processes over a trillion requests per day, to overcome Nginx's architectural limits, achieve higher performance with lower resource usage, and add features that were impractical in the legacy stack.
Last year Cloudflare announced it would retire Nginx in favor of its home‑grown, next‑generation HTTP proxy Pingora, claiming it is faster, more efficient, and more secure.
Pingora is a Rust‑implemented HTTP proxy handling more than one trillion requests per day, delivering higher performance while using only a third of the CPU and memory of the previous proxy infrastructure.
Why Build a New Proxy
As the world’s largest free CDN, Cloudflare’s edge layer processes the highest volume of web requests. The existing Nginx‑based architecture reached limits in performance, scalability, and feature support, prompting the need for a custom solution.
Architectural Limits Hurt Performance
Nginx’s worker‑process model pins each request to a single worker, causing load imbalance across CPU cores and degrading overall throughput. Connection reuse also suffers: each worker maintains its own connection pool, so a warm connection held by one worker cannot be reused by another, forcing repeated TCP/TLS handshakes and raising latency.
Even with optimizations, the fundamental worker‑process design cannot fully resolve these issues.
Some Features Are Hard to Add
Complex business requirements, such as retrying requests with different header sets, are not supported by Nginx without extensive source modifications. Additionally, Nginx’s C codebase lacks memory safety, increasing the risk of bugs and crashes.
Choosing to Build Our Own
Cloudflare evaluated three options: continue investing in a customized Nginx, migrate to another third‑party proxy (e.g., Envoy), or build an internal platform from scratch. Over several quarters, the team concluded that building a bespoke proxy offered the best long‑term ROI.
Pingora Project
Design Decisions
Rust was chosen for its memory‑safety guarantees without sacrificing performance. Instead of relying on existing HTTP libraries like hyper, Cloudflare built its own library to maximize flexibility and support non‑standard HTTP traffic seen on the internet.
The runtime uses a multithreaded model with Tokio, enabling efficient connection pooling and work‑stealing across threads.
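The benefit of sharing pools across threads can be sketched in a few lines. This is an illustrative, standard‑library‑only model (the `Conn` and `SharedPool` types are hypothetical, not Pingora's actual code): any worker thread can return an idle upstream connection, and any other worker can pick it up, whereas a per‑worker design would leave that connection invisible outside its owner.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for an established upstream connection (hypothetical type).
#[derive(Debug)]
struct Conn {
    id: usize,
}

// A pool shared by all worker threads. In an Nginx-style per-worker
// design, each worker owns a private pool, so a warm connection idling
// in worker A is invisible to worker B, which must then pay for a
// fresh TCP/TLS handshake.
#[derive(Clone)]
struct SharedPool {
    idle: Arc<Mutex<VecDeque<Conn>>>,
}

impl SharedPool {
    fn new() -> Self {
        Self { idle: Arc::new(Mutex::new(VecDeque::new())) }
    }

    // Reuse an idle connection if any worker has returned one.
    fn checkout(&self) -> Option<Conn> {
        self.idle.lock().unwrap().pop_front()
    }

    fn checkin(&self, conn: Conn) {
        self.idle.lock().unwrap().push_back(conn);
    }
}

fn main() {
    let pool = SharedPool::new();

    // Worker 1 finishes a request and returns its upstream connection.
    {
        let pool = pool.clone();
        thread::spawn(move || pool.checkin(Conn { id: 1 }))
            .join()
            .unwrap();
    }

    // Worker 2 can now reuse it instead of opening a new connection.
    let reused = {
        let pool = pool.clone();
        thread::spawn(move || pool.checkout()).join().unwrap()
    };
    println!("reused: {:?}", reused.map(|c| c.id)); // prints "reused: Some(1)"
}
```

Pingora's actual runtime builds on Tokio's work‑stealing scheduler rather than raw threads, but the key property is the same: idle upstream connections are visible to every worker.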
Pingora provides a programmable request‑lifecycle API similar to Nginx/OpenResty, allowing developers to write request filters that can modify or reject traffic.
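A request‑filter phase of this kind can be sketched as a trait that each filter implements. The names below (`RequestFilter`, `Action`, `AdminBlocker`) are illustrative assumptions, not Pingora's actual API; the point is that a filter can either rewrite the request in place or short‑circuit it with a response code.

```rust
// Minimal model of a programmable request-filter phase.
struct Request {
    path: String,
    headers: Vec<(String, String)>,
}

enum Action {
    Continue,    // pass the request on to the next phase
    Reject(u16), // short-circuit with an HTTP status code
}

trait RequestFilter {
    fn filter(&self, req: &mut Request) -> Action;
}

// Example filter: block an internal path, tag everything else.
struct AdminBlocker;

impl RequestFilter for AdminBlocker {
    fn filter(&self, req: &mut Request) -> Action {
        if req.path.starts_with("/admin") {
            return Action::Reject(403);
        }
        // Filters may also rewrite the request, e.g. add a header.
        req.headers.push(("x-filtered".into(), "1".into()));
        Action::Continue
    }
}

fn main() {
    let mut req = Request { path: "/admin/metrics".into(), headers: vec![] };
    match AdminBlocker.filter(&mut req) {
        Action::Reject(code) => println!("rejected with {code}"), // prints "rejected with 403"
        Action::Continue => println!("forwarded"),
    }
}
```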
Pingora Is Faster in Production
Median TTFB improved by 5 ms and the 95th percentile by 80 ms, mainly due to shared‑across‑threads connection pools that increase reuse and reduce handshake overhead.
New connections per second dropped to one‑third of the legacy service, and connection‑reuse rates rose from 87.1% to 99.92%, saving an estimated 434 years of handshake time per day across all customers.
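A rough back‑of‑envelope check (my own estimate, not from the article) shows how the "434 years per day" figure can follow from the reuse numbers, assuming roughly 10¹² requests per day:

```rust
// Back-of-envelope: reuse rising from 87.1% to 99.92% means the share
// of requests needing a fresh handshake fell from 12.9% to 0.08%.
fn main() {
    let requests_per_day = 1.0e12_f64;                 // assumed daily volume
    let avoided = (0.129 - 0.0008) * requests_per_day; // ~1.28e11 handshakes
    let saved_secs = 434.0 * 365.0 * 24.0 * 3600.0;    // 434 years, in seconds
    let per_handshake_ms = saved_secs / avoided * 1000.0;
    // Implies roughly 100 ms per avoided handshake -- a plausible cost
    // for a cross-network TCP + TLS setup.
    println!("~{per_handshake_ms:.0} ms per handshake"); // prints "~107 ms per handshake"
}
```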
More Features
The developer‑friendly interface has enabled rapid addition of core capabilities, such as upstream HTTP/2 support (paving the way for gRPC) and the Cache Reserve feature, which uses R2 storage as a caching layer.
Higher Efficiency
Compared with the previous Lua‑based stack, Pingora’s Rust implementation reduces CPU and memory consumption by roughly 70 % and 67 % under the same load. Direct string access eliminates costly Lua‑C data copying, and the multithreaded model avoids mutex‑heavy shared memory used by Nginx.
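The zero‑copy point can be illustrated in plain Rust (a standalone sketch, not Pingora code): a filter can borrow a header value as a `&str` pointing directly into the request buffer, whereas a Lua/C bridge must copy the bytes into a Lua string on every access.

```rust
fn main() {
    // Pretend this is the raw request buffer held by the proxy.
    let buffer = String::from("GET /index.html HTTP/1.1\r\nhost: example.com\r\n");

    // Borrow the host header value in place: no allocation, no copy,
    // just a pointer + length into `buffer`. (str::lines strips the
    // trailing \r of CRLF line endings.)
    let host: &str = buffer
        .lines()
        .find_map(|l| l.strip_prefix("host: "))
        .unwrap_or("");

    println!("host = {host}"); // prints "host = example.com"
}
```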
Greater Security
Rust’s memory‑safety guarantees rule out whole classes of undefined behavior, allowing engineers to iterate faster without fearing crashes. Since its launch, Pingora has served trillions of requests without a single crash caused by its service code.
Conclusion
Cloudflare has deployed Pingora as a faster, more efficient, and more extensible internal proxy platform that will serve current and future products, with plans to open‑source it after further maturation.