Why Cloudflare Replaced Nginx with Pingora: Inside the Next‑Gen Proxy Architecture
This article examines Cloudflare's decision to abandon Nginx for its home‑grown Pingora proxy, detailing the architectural limits of Nginx, the design choices behind Pingora, performance gains, added features, and the broader implications for large‑scale HTTP traffic handling.
Introduction
Cloudflare replaced its Nginx‑based edge proxy with a home‑grown Rust proxy called Pingora. The change was driven by the need to handle >1 trillion daily client requests while reducing CPU and memory usage.
Why a New Proxy Was Needed
Architectural limits of Nginx
Nginx uses a worker‑process model in which each request is handled entirely by a single worker, and each worker keeps its own connection pools. This creates load imbalance across CPU cores and prevents connection reuse across workers: as the number of workers grows, upstream connections fragment into many small per‑worker pools, increasing hardware consumption and latency.
Difficulties adding required features
Complex scenarios such as retrying a request with a different header set require source‑code changes in Nginx. Nginx is written in C without memory‑safety guarantees; extending it with Lua adds runtime overhead and lacks static type checking. The community is relatively closed, making rapid innovation hard.
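To make the retry scenario concrete, here is a minimal Rust sketch of "retry the upstream with a different header set on each attempt" expressed as plain application code, the kind of logic that required source‑level changes in Nginx. All type and function names here are illustrative, not Pingora's actual API; `send` stands in for real network I/O.

```rust
#[derive(Debug, Clone, PartialEq)]
struct Request {
    path: String,
    headers: Vec<(String, String)>,
}

/// Pick a header set based on which attempt this is (hypothetical policy).
fn headers_for_attempt(attempt: u32) -> Vec<(String, String)> {
    if attempt == 0 {
        vec![("X-Backend-Group".into(), "primary".into())]
    } else {
        vec![("X-Backend-Group".into(), "fallback".into())]
    }
}

/// Try the upstream up to `max_attempts` times, rebuilding the request
/// with a fresh header set each time. Returns the first successful status.
fn retry_with_headers<F>(path: &str, max_attempts: u32, mut send: F) -> Option<u32>
where
    F: FnMut(&Request) -> Result<u32, ()>,
{
    for attempt in 0..max_attempts {
        let req = Request {
            path: path.to_string(),
            headers: headers_for_attempt(attempt),
        };
        if let Ok(status) = send(&req) {
            return Some(status);
        }
    }
    None
}
```

In a memory‑safe, statically typed language this policy is ordinary code with compile‑time checks, rather than a C patch or a dynamically typed Lua script.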
Evaluation of Alternatives
Continue investing in Nginx and pay for customizations – labor‑intensive.
Migrate to a third‑party proxy (e.g., Envoy) – risk of repeating the same cycle.
Build an internal platform from scratch – highest upfront engineering effort but best long‑term ROI.
Pingora Project
Design Decisions
Pingora is implemented in Rust to obtain C‑level performance with compile‑time memory safety. Instead of reusing existing HTTP libraries such as hyper, Cloudflare built a custom HTTP library to handle many non‑RFC‑compliant traffic patterns encountered on the open Internet.
Workload scheduling uses a multithreaded Tokio runtime with work‑stealing, allowing all threads to share a single connection pool.
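The key property of the shared pool is that an idle connection opened by one thread can be reused by any other. The following sketch illustrates that idea with standard‑library threads and a mutex; Pingora itself runs on a multithreaded Tokio runtime, and these types are not its real API.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

#[derive(Debug)]
struct Conn {
    id: usize,
}

/// One pool shared by every worker thread, instead of one pool per worker.
#[derive(Clone)]
struct SharedPool {
    idle: Arc<Mutex<VecDeque<Conn>>>,
    next_id: Arc<Mutex<usize>>,
}

impl SharedPool {
    fn new() -> Self {
        SharedPool {
            idle: Arc::new(Mutex::new(VecDeque::new())),
            next_id: Arc::new(Mutex::new(0)),
        }
    }

    /// Reuse an idle connection if one exists, else open a new one.
    /// The bool reports whether the connection was reused.
    fn checkout(&self) -> (Conn, bool) {
        if let Some(conn) = self.idle.lock().unwrap().pop_front() {
            (conn, true)
        } else {
            let mut id = self.next_id.lock().unwrap();
            *id += 1;
            (Conn { id: *id }, false)
        }
    }

    /// Return a connection to the shared idle list.
    fn checkin(&self, conn: Conn) {
        self.idle.lock().unwrap().push_back(conn);
    }
}
```

With per‑worker pools (the Nginx model), the connection checked in by one worker would be invisible to the others; sharing the pool is what drives the reuse‑rate gains described below.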
Pingora exposes a programmable request‑lifecycle API similar to Nginx/OpenResty. Developers can register request‑filter callbacks that run when headers are received, enabling modification or rejection of traffic without touching core proxy code.
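A request‑filter hook of this kind can be sketched as a trait whose callback fires when headers arrive and either lets the (possibly modified) request continue upstream or rejects it. Trait and type names below are illustrative, not Pingora's actual API.

```rust
struct RequestHeaders {
    path: String,
    headers: Vec<(String, String)>,
}

#[derive(Debug, PartialEq)]
enum FilterAction {
    Continue,    // proxy the (possibly modified) request upstream
    Reject(u16), // short-circuit with this HTTP status code
}

/// Callback invoked once request headers have been received.
trait RequestFilter {
    fn on_request_headers(&self, req: &mut RequestHeaders) -> FilterAction;
}

/// Example filter: reject one path prefix, tag everything else.
struct BlockAdmin;

impl RequestFilter for BlockAdmin {
    fn on_request_headers(&self, req: &mut RequestHeaders) -> FilterAction {
        if req.path.starts_with("/admin") {
            return FilterAction::Reject(403);
        }
        req.headers.push(("X-Edge".into(), "filtered".into()));
        FilterAction::Continue
    }
}
```

Because filters are ordinary typed code registered against the lifecycle, new traffic policies can ship without touching the core proxy.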
Performance in Production
Measured median time‑to‑first‑byte (TTFB) improved by 5 ms and the 95th‑percentile by 80 ms compared with the legacy stack. Because connections are shared across threads, connection‑reuse rates rose from 87.1 % to 99.92 % for a major client, reducing new connections by a factor of 160 and saving an estimated 434 years of TLS handshake time per day.
New‑connection rate per second dropped to roughly one‑third of the previous service.
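The "factor of 160" follows directly from the reuse rates: TLS handshake cost is driven by the fraction of requests that need a new connection, i.e. one minus the reuse rate, which fell from 12.9 % to 0.08 %. A quick arithmetic check:

```rust
/// Factor by which new-connection volume drops when the connection-reuse
/// rate improves from `old_reuse` to `new_reuse` (both as fractions in [0, 1)).
fn new_conn_reduction(old_reuse: f64, new_reuse: f64) -> f64 {
    (1.0 - old_reuse) / (1.0 - new_reuse)
}
```

Plugging in the article's figures, `new_conn_reduction(0.871, 0.9992)` is about 161, consistent with the quoted ~160x reduction.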
Additional Features
Pingora’s API allowed rapid addition of HTTP/2 upstream support, paving the way for gRPC delivery. A “Cache Reserve” feature integrates Cloudflare R2 storage as a caching layer.
Efficiency and Resource Usage
Under identical traffic loads Pingora consumes about 70 % less CPU and 67 % less memory than the previous Lua‑based implementation. The multithreaded model eliminates per‑thread mutexes required by Nginx’s shared memory and reduces the overhead of copying HTTP headers between C structures and Lua strings.
Safety Guarantees
Rust’s ownership model eliminates most classes of memory‑unsafe bugs. In millions of processed requests Pingora has not experienced a crash attributable to its own code; observed failures were traced to hardware or kernel issues.
Conclusion
Pingora provides a faster, more efficient, and extensible edge proxy that serves as the foundation for current and future Cloudflare products. Cloudflare plans to open‑source the project after further maturation.