Nginx vs Envoy: Real‑World Performance Benchmark on AWS

This article translates and expands Anton Putra’s Nginx vs. Envoy performance benchmark, detailing the AWS test environment, Terraform and Ansible provisioning, proxy configurations, load‑testing methodology with K6, and the resulting request‑per‑second and latency comparisons across HTTP, HTTPS, and gRPC workloads.


This translation summarizes Anton Putra’s benchmark comparing the high‑performance service‑mesh proxy Envoy with the traditional web server Nginx. Both proxies were deployed on AWS EC2 instances using Terraform and Ansible to create a VPC, configure networking, and launch the test machines.

Test Scenarios

Three workloads were exercised:

Plain HTTP requests.

HTTPS with TLS termination at the proxy.

gRPC traffic with TLS termination at the proxy.

The backend service was a Go application built with the fasthttp library and the official gRPC SDK. CPU usage was measured with Prometheus and Node Exporter, request counts were exposed via OpenTelemetry, and load was generated with the K6 load‑testing tool, which also recorded latency.

Envoy Configuration

Envoy was deployed either on a VM or as a sidecar in a pod. Its configuration is verbose; key elements include:

Listening on port 80 and forwarding all traffic to the my_app cluster.

For HTTPS, enabling port 443 and adding the transport_socket property to terminate TLS.

Explicitly enabling HTTP/2 and using TLS 1.3 with specified certificates and private keys.

For gRPC, opening port 8443 and routing to the grpc_app cluster.
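The plain‑HTTP case above can be sketched as a minimal static Envoy v3 config. This is an illustrative fragment, not the author's file: the backend address `10.0.1.10:8080` and the listener name are placeholder assumptions.

```yaml
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: my_app
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: my_app }    # all traffic goes to my_app
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: my_app
    connect_timeout: 5s
    type: STRICT_DNS
    load_assignment:
      cluster_name: my_app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 10.0.1.10, port_value: 8080 }
```

The HTTPS and gRPC listeners extend this skeleton with a `transport_socket` on the filter chain (for TLS termination) and, for gRPC, explicit HTTP/2 options on the upstream cluster.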

The Envoy binary used in the benchmark (v1.25.0) can be downloaded from the GitHub releases page. Ansible handlers were used to restart the service whenever configuration files changed.
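The restart-on-change pattern can be sketched with a handler plus a `notify` on the templating task. The role layout, file paths, and unit name below are illustrative assumptions, not the author's repository layout.

```yaml
# roles/envoy/handlers/main.yml
- name: restart envoy
  ansible.builtin.systemd:
    name: envoy
    state: restarted

# roles/envoy/tasks/main.yml -- the task that renders the config
- name: render envoy config
  ansible.builtin.template:
    src: envoy.yaml.j2
    dest: /etc/envoy/envoy.yaml
  notify: restart envoy   # handler only fires when the file actually changes
```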

Nginx Configuration

Nginx’s configuration is concise. It listens on port 80 and forwards requests to the backend. For HTTPS, the ssl directive and http2 protocol are enabled, with certificates and TLS 1.3 specified. To proxy gRPC, proxy_pass is replaced with grpc_pass when the upstream does not use TLS.
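The three Nginx variants described above can be sketched as follows. Backend addresses, ports, and certificate paths are placeholder assumptions, not the author's values.

```nginx
# Plain HTTP: listen on 80 and forward to the backend.
server {
    listen 80;
    location / {
        proxy_pass http://10.0.1.10:8080;
    }
}

# HTTPS: TLS 1.3 terminated at the proxy, HTTP/2 enabled.
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/tls/server.crt;
    ssl_certificate_key /etc/nginx/tls/server.key;
    ssl_protocols       TLSv1.3;
    location / {
        proxy_pass http://10.0.1.10:8080;
    }
}

# gRPC: grpc_pass replaces proxy_pass for a plaintext upstream.
server {
    listen 8443 ssl http2;
    ssl_certificate     /etc/nginx/tls/server.crt;
    ssl_certificate_key /etc/nginx/tls/server.key;
    location / {
        grpc_pass grpc://10.0.1.10:9090;
    }
}
```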

Benchmark Results

Using K6, the following peak request‑per‑second rates were observed:

HTTP: Envoy ~800‑900 RPS, Nginx ~4,000 RPS.

HTTPS: Envoy ~900 RPS, Nginx ~1,500 RPS.

gRPC: Envoy handled more requests than Nginx with lower CPU usage, though stability degraded above ~200 RPS.

When the load was reduced (e.g., 500 virtual users), latency differences between the two proxies became negligible.

Conclusions

Envoy offers powerful edge‑proxy capabilities and excels at gRPC forwarding, making it suitable for service‑mesh scenarios like Istio. However, for simple HTTP/HTTPS workloads, Nginx delivers higher raw throughput and is friendlier for beginners. The author provides a GitHub repository with the full source code and deployment scripts for reproducibility.


References

Original benchmark video: https://www.youtube.com/watch?v=0Q9I-x--np4

GitHub repository: https://github.com/antonputra/tutorials/tree/main/lessons/151

[Figure: HTTP benchmark chart]
[Figure: HTTPS benchmark chart]
[Figure: gRPC benchmark chart]
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
