
Nginx Gzip Compression Optimization: Boosting Performance by Over 50×

This article documents a real‑world nginx performance tuning case in which adjusting gzip settings and switching to static compression cut CPU usage dramatically and raised QPS from 50,000 to 270,000, a more than fifty‑fold overall performance improvement.

JD Retail Technology

Background: A critical business requirement demanded staged data display with an estimated peak of 90,000 QPS. The team's first instinct as backend engineers was to write robust APIs, employ hierarchical caching, add machines, and max out thread pools. But because the data updated infrequently, the team instead chose to generate static files and serve them via CDN.

The architecture flow is illustrated below:

After each data update, new static files are generated and CDN is refreshed, causing a surge of origin requests that force the application servers to handle the full 90k QPS.

First Load Test: Two data centers with 40 machines (4 cores each) served 25 KB files at 50,000 QPS, pushing CPU to 90%. Adding machines did not relieve the bottleneck, and the file later grew to 125 KB, stressing the system further.
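A quick back-of-envelope calculation (using the load-test numbers above, with an assumed gzip ratio) shows why compression was enabled in the first place:

```java
// Back-of-envelope egress estimate for the first load test:
// 50,000 QPS of 25 KB responses. The 5:1 gzip ratio below is an
// assumption for text/JSON payloads, not a figure from the article.
public class BandwidthEstimate {
    public static void main(String[] args) {
        long qps = 50_000;
        long bytesPerResponse = 25 * 1024;   // 25 KB per file
        double gbPerSecond =
            qps * (double) bytesPerResponse / (1024.0 * 1024 * 1024);
        System.out.printf("Uncompressed egress: %.2f GB/s%n", gbPerSecond);
        // With roughly 5:1 gzip on text, egress drops to about a fifth,
        // which is why gzip was worth its CPU cost for bandwidth alone.
        System.out.printf("With ~5:1 gzip:      %.2f GB/s%n", gbPerSecond / 5);
    }
}
```

At roughly 1.2 GB/s uncompressed, serving these files without compression would have been prohibitively expensive in bandwidth, which is the trade-off the rest of the article unpicks.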

The team suspected nginx's gzip compression (enabled to save bandwidth) of consuming excessive CPU.

server {
    listen 80;
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain application/css text/css application/xml text/javascript application/javascript application/x-javascript;
    ...
}

Second Load Test: The gzip compression level was lowered from 6 to 2 to reduce CPU work. CPU still saturated quickly, and QPS barely reached the 90,000 target, confirming gzip's heavy impact on CPU.
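The level-versus-cost trade-off behind gzip_comp_level can be sketched with java.util.zip.Deflater (this is an illustrative sketch, not the team's benchmark; the payload is invented):

```java
import java.util.zip.Deflater;

// Sketch: compare deflate levels 2 and 6 on a repetitive JSON-like payload
// to see the size-vs-CPU trade-off behind nginx's gzip_comp_level.
public class LevelCompare {
    static int compressedSize(byte[] input, int level) {
        Deflater d = new Deflater(level);
        d.setInput(input);
        d.finish();
        byte[] out = new byte[input.length * 2];
        int total = 0;
        while (!d.finished()) {
            total += d.deflate(out);   // accumulate compressed bytes
        }
        d.end();
        return total;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 2000; i++) {
            sb.append("{\"sku\":").append(i % 100).append("},");
        }
        byte[] payload = sb.toString().getBytes();

        System.out.println("original: " + payload.length + " bytes");
        System.out.println("level 2:  " + compressedSize(payload, 2) + " bytes");
        System.out.println("level 6:  " + compressedSize(payload, 6) + " bytes");
        // Level 6 compresses harder but burns more CPU per request; level 2
        // is cheaper yet still saturated the CPUs here, because every single
        // request re-compressed the same unchanging file.
    }
}
```

The key observation is not which level wins, but that with dynamic compression the cost is paid on every request, no matter how cheap the level.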

Understanding that nginx is a high‑performance web server, the team investigated why a single static file could overload the application servers.

Third Load Test: Having confirmed gzip's CPU cost, the team turned to static compression: generate .gz files ahead of time with GZIPOutputStream and enable gzip_static on; in nginx.
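The pre-compression step can look like the following minimal sketch (file names are illustrative, not from the original setup): write data.json.gz next to data.json at publish time, so nginx never compresses at request time.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;

// Minimal sketch of pre-compressing a static file with GZIPOutputStream,
// producing <name>.gz alongside the original for nginx's gzip_static module.
public class StaticGzip {
    static void gzipFile(Path source) throws IOException {
        Path target = source.resolveSibling(source.getFileName() + ".gz");
        try (InputStream in = Files.newInputStream(source);
             OutputStream fileOut = Files.newOutputStream(target);
             GZIPOutputStream gzOut = new GZIPOutputStream(fileOut)) {
            in.transferTo(gzOut);   // compress once, at publish time
        }
    }

    public static void main(String[] args) throws IOException {
        Path source = Files.createTempFile("data", ".json");
        Files.writeString(source, "{\"activity\":\"demo\"}".repeat(1000));
        gzipFile(source);
        Path gz = source.resolveSibling(source.getFileName() + ".gz");
        System.out.println("wrote " + gz + " (" + Files.size(gz) + " bytes)");
    }
}
```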

gzip_static on;
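In context, the directive sits alongside the static root (the directive names are real nginx directives; the path is illustrative). With gzip_static on, nginx serves the pre-built .gz file whenever the client sends Accept-Encoding: gzip, and falls back to the uncompressed file otherwise:

    location / {
        root /data/static;   # data.json and data.json.gz live side by side
        gzip_static on;      # serve data.json.gz when the client accepts gzip
        gzip_vary on;        # emit Vary: Accept-Encoding for caches and CDN
    }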

With static compression, the 40 machines handled the 90,000 QPS load at only 7% CPU. Pushed further, QPS reached 270,000 with CPU still around 7% (down from 90%) and network throughput at 89 MB/s, a more than fifty‑fold performance gain.

Conclusion: Static compression offers overwhelming advantages for immutable files: it eliminates per‑request CPU overhead while still saving bandwidth. Dynamic compression remains the right choice for API responses that change on every request, where on‑the‑fly compression is unavoidable. The exercise deepened the team's understanding of nginx's gzip mechanisms and underscored the importance of matching the compression strategy to the scenario.

For backend engineers, nginx is a familiar tool for reverse proxying, header manipulation, and load balancing; as this case shows, mastering both dynamic and static gzip compression is a worthwhile addition to that toolkit.

Tags: Backend, Performance Optimization, nginx, gzip, compression
Written by JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.