Nginx vs Envoy: Real‑World Performance Benchmark and Deployment Guide
This article translates and expands Anton Putra's benchmark. It details how to deploy Nginx and Envoy on AWS with Terraform and Ansible, run HTTP, HTTPS, and gRPC load tests with k6, measure CPU usage and latency with Prometheus, and compare the resulting throughput and stability of the two proxies.
Overview
Envoy is a high‑performance edge proxy designed for service‑mesh deployments. It can run on virtual machines or as a sidecar container in Kubernetes pods. Nginx originated as a web server but also functions as a reverse proxy. Both support HTTP/2, gRPC, load balancing, and common proxy features.
Test Infrastructure
Infrastructure is provisioned on AWS using Terraform to create a VPC, subnets, and security groups, and Ansible to launch EC2 instances. Three benchmark scenarios are executed:
Plain HTTP traffic.
HTTPS with TLS termination at the proxy.
gRPC with TLS termination at the proxy.
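The provisioning step might look like the following minimal Terraform sketch (resource names, CIDR ranges, and ports are illustrative, not taken from the original repository):

```hcl
# Illustrative VPC, subnet, and security group for the benchmark hosts.
resource "aws_vpc" "benchmark" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.benchmark.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "proxy" {
  vpc_id = aws_vpc.benchmark.id

  # HTTP (80), HTTPS (443), and gRPC (8443) from inside the VPC.
  ingress {
    from_port   = 80
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }
  ingress {
    from_port   = 8443
    to_port     = 8443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }
}
```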
The backend service is written in Go, using the fasthttp library for HTTP and the official gRPC SDK for gRPC endpoints.
Metrics collection:
CPU usage is recorded with Prometheus and Node Exporter.
Request counts are exported as Prometheus metrics via OpenTelemetry.
Load is generated with the k6 tool.
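A k6 script for the plain-HTTP scenario might look like this minimal sketch, run with `k6 run script.js` (the target URL, virtual-user count, and duration are illustrative; the actual scripts are in the repository linked below):

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 500,        // virtual users; illustrative
  duration: '5m',  // the real tests ramp load gradually
};

export default function () {
  // Hit the proxy, which forwards to the Go backend.
  const res = http.get('http://proxy.internal/');
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```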
Source code and deployment scripts are available at https://github.com/antonputra/tutorials/tree/main/lessons/151
Envoy Configuration
Envoy’s static configuration is expressed in YAML/JSON. Key settings for the three tests are:
HTTP test: expose port 80, enable the admin interface for Prometheus metrics, and route all traffic to the my_app cluster (a single endpoint).
HTTPS test: expose port 443, add a transport_socket section with the TLS 1.3 certificate and private key, and explicitly enable HTTP/2.
gRPC test: expose port 8443 and route to the grpc_app cluster, again with TLS 1.3 and HTTP/2.
Envoy version 1.25.0 is used, downloaded from the official GitHub releases page.
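Abbreviated, the HTTP listener portion of such a static configuration might look like this sketch (the backend address and admin port are illustrative):

```yaml
static_resources:
  listeners:
    - address:
        socket_address: { address: 0.0.0.0, port_value: 80 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                route_config:
                  virtual_hosts:
                    - name: my_app
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: my_app }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: my_app
      load_assignment:
        cluster_name: my_app
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: 10.0.1.10, port_value: 8080 }
# Admin interface, scraped by Prometheus.
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
```

The HTTPS and gRPC variants add a transport_socket with the TLS 1.3 certificate and key, and enable HTTP/2 on the cluster.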
Nginx Configuration
Nginx configuration is concise:
HTTP: listen on port 80 and proxy_pass to the backend.
HTTPS: listen on port 443, enable ssl and http2, and provide the same TLS 1.3 certificate/key pair as Envoy.
gRPC: replace proxy_pass with grpc_pass to forward gRPC traffic.
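In sketch form, the HTTPS and gRPC server blocks look roughly like this (certificate paths and upstream addresses are illustrative):

```nginx
# HTTPS: TLS termination, then plain HTTP to the backend.
server {
    listen 443 ssl http2;
    ssl_protocols       TLSv1.3;
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        proxy_pass http://10.0.1.10:8080;
    }
}

# gRPC: same TLS setup, but grpc_pass to the gRPC backend.
server {
    listen 8443 ssl http2;
    ssl_protocols       TLSv1.3;
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        grpc_pass grpc://10.0.1.10:9090;
    }
}
```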
Benchmark Results
HTTP: Envoy sustains ~800‑900 requests per second (RPS) before CPU reaches 100%; Nginx reaches ~4,000 RPS with default settings.
HTTPS: Envoy caps at ~900 RPS, while Nginx processes ~1,500 RPS. Envoy crashed repeatedly under load, and systemd stopped the service after three restart attempts; at a reduced load of 500 virtual users, the two proxies showed comparable latency.
gRPC: Envoy handles more requests with lower CPU usage than Nginx, but both proxies become unstable above ~200 RPS. At 150 virtual users, Envoy's median latency is about 1 ms lower than Nginx's.
Conclusions
Envoy offers strong edge‑proxy capabilities and native gRPC support, making it well‑suited for service‑mesh environments such as Istio. However, in raw throughput tests on plain HTTP and HTTPS, Nginx delivers higher request rates and is easier to configure for beginners. For gRPC workloads, Envoy shows better performance and lower CPU consumption.
ITPUB
Official ITPUB account sharing technical insights, community news, and exciting events.