
How Nginx Caching Can Boost Performance by 10× – Deep Dive

This article explains the fundamentals of Nginx proxy caching, walks through the request‑cache workflow, and provides concrete configuration examples and best‑practice tips that can increase backend throughput and response speed by up to ten times.

Architect Chen

Why Nginx Caching Matters

Nginx is a high‑performance reverse proxy and load balancer, and its caching mechanism is a key lever for reducing backend load and accelerating response times. Properly tuned proxy_cache and fastcgi_cache can deliver up to a ten‑fold performance boost.

Cache Workflow

The caching process follows a simple loop: a request arrives, Nginx computes a cache key (typically based on host, URI, query string, and relevant headers), then checks the cache. If the key hits and the entry is fresh, Nginx returns the cached response directly; otherwise it forwards the request to the upstream server, receives the full response, writes it to the cache according to the configured policy, and finally returns it to the client.
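This workflow assumes a cache zone has already been declared at the http level; the sample configuration later in this article references a zone named mycache without defining it. A minimal declaration might look like the following (the path and size values are illustrative, not recommendations):

```nginx
# http-level context: declare where and how cached responses are stored
proxy_cache_path /var/cache/nginx         # on-disk location of cache files
                 levels=1:2               # two-level directory hierarchy to avoid one huge flat directory
                 keys_zone=mycache:10m    # shared-memory zone "mycache"; ~8,000 keys per MB
                 max_size=1g              # evict least-recently-used entries once 1 GB is reached
                 inactive=60m             # drop entries not accessed for 60 minutes, fresh or not
                 use_temp_path=off;       # write cache files in place instead of via a temp directory
```

Note that inactive removes entries based on access time regardless of their proxy_cache_valid freshness, so the two settings work independently.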

[Figure: Nginx cache workflow diagram]

Key Configuration Directives

proxy_cache_path – defines the cache storage location, size limits, key storage format, and eviction policy (inactive, max_size).

proxy_cache_key – controls the granularity of cached objects; a well‑designed key is essential for a high hit rate.

proxy_cache_valid – sets the lifetime of cached responses per status code (e.g., proxy_cache_valid 200 302 10m; caches 200/302 responses for ten minutes).

proxy_no_cache and proxy_cache_bypass – proxy_no_cache prevents a response from being saved to the cache, while proxy_cache_bypass skips the cache lookup for a request. Both take a list of variables; the condition triggers when any variable is non-empty and not "0", and they are typically configured together.

add_header X-Cache-Status $upstream_cache_status – adds a debugging header that shows HIT, MISS, EXPIRED, BYPASS, and other cache states.
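Together, the bypass directives support patterns like skipping the cache for logged-in users. A sketch of that idea (the sessionid cookie name is an assumption; substitute whatever your application uses):

```nginx
# http-level context: flag requests carrying a session cookie
map $cookie_sessionid $skip_cache {
    default 0;
    ~.+     1;     # any non-empty session cookie bypasses the cache
}

server {
    location / {
        proxy_pass http://backend;
        proxy_cache mycache;
        proxy_cache_bypass $skip_cache;    # do not serve this request from cache
        proxy_no_cache     $skip_cache;    # do not store the response either
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

With this in place, anonymous traffic is served from cache while personalized responses always reach the backend.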

Sample Configuration

# Assumes a zone named "mycache" has been declared with proxy_cache_path
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        # Enable caching
        proxy_cache mycache;
        proxy_cache_key $scheme$proxy_host$request_uri;   # default-style key: scheme + upstream host + URI
        proxy_cache_valid 200 302 10m;                    # cache 200/302 for 10 minutes
        proxy_cache_valid 404 1m;                         # cache 404 for 1 minute
        proxy_cache_valid any 5m;                         # default 5-minute cache
        # Skip cache lookup and storage when any of these variables is non-empty
        proxy_no_cache $http_pragma $http_authorization $arg_nocache;
        proxy_cache_bypass $http_pragma $http_authorization $arg_nocache;
        # Debugging header
        add_header X-Cache-Status $upstream_cache_status;
    }
}

Best Practices for 10× Performance

Aim for a cache hit rate of 80% or higher by designing a proxy_cache_key that captures every variable affecting response uniqueness and nothing else; an over-specific key fragments the cache and drives the hit rate down.

Separate static and dynamic resources: give static files long TTLs, while applying short TTLs or no caching to dynamic content.

Combine Nginx caching with CDN layers for multi‑level caching and further latency reduction.

Use expires directives (e.g., expires 1d;) for client‑side caching of immutable assets.
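The static/dynamic split and the client-side expires hint above can be sketched as follows (the file extensions, paths, and TTLs are illustrative):

```nginx
# Long TTL for static assets, plus a client-side caching hint
location ~* \.(css|js|png|jpg|woff2)$ {
    proxy_pass http://backend;
    proxy_cache mycache;
    proxy_cache_valid 200 7d;    # serve static hits from the proxy cache for a week
    expires 1d;                  # also let browsers cache the asset for a day
}

# Short TTL for dynamic content
location /api/ {
    proxy_pass http://backend;
    proxy_cache mycache;
    proxy_cache_valid 200 30s;   # a brief TTL absorbs traffic spikes without serving stale data
}
```

Even a 30-second TTL on a hot dynamic endpoint can collapse thousands of identical backend requests into one upstream fetch per expiry window.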

Written by

Architect Chen

Sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.
