Boost Web Performance 10× with Nginx Caching: A Step‑by‑Step Guide

Learn how Nginx’s built-in caching can cut response times by as much as tenfold, reduce backend load, and increase throughput, with explanations of the caching workflow, the essential configuration directives, micro-caching tricks, and FastCGI caching examples for high-traffic services.

Mike Chen's Internet Architecture

Why Nginx caching matters

In high‑concurrency architectures, Nginx acts as both traffic gateway and performance optimizer. Its built‑in cache can reduce backend pressure and accelerate responses by more than ten times.

Cache concept

Nginx stores backend responses on local disk or memory. Subsequent identical requests are served directly from the cache without contacting the origin server.

(Figure: Nginx cache principle illustration)

Cache workflow

The process follows four stages: request interception → cache lookup → content response → status write‑back.
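Each of these stages surfaces in Nginx’s $upstream_cache_status variable (MISS, HIT, EXPIRED, BYPASS, and so on), so one simple way to watch the workflow in action is to log that variable. The log file path and format name below are illustrative, not part of the original setup:

```nginx
# Sketch: record the cache status of every request to observe the workflow.
# Place in the http {} block; the path and format name are assumptions.
log_format cache_status '$remote_addr "$request" status=$status cache=$upstream_cache_status';
access_log /var/log/nginx/cache_status.log cache_status;
```

Note that $upstream_cache_status is empty for requests that never touch a cache-enabled location.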

Key configuration directives

Four main configuration blocks are required.

1. Define a cache zone

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache_zone:100m max_size=10g inactive=60m use_temp_path=off;

Parameters:

levels=1:2 – two-level directory structure that avoids putting too many files in a single folder.
keys_zone=cache_zone:100m – reserves 100 MB of shared memory for cache keys and metadata.
max_size=10g – caps total disk usage at 10 GB.
inactive=60m – evicts entries not accessed for 60 minutes.
use_temp_path=off – writes responses directly into the cache directory for better I/O.

2. Enable proxy cache

location /api/ {
    proxy_pass http://backend;
    proxy_cache cache_zone;
    proxy_cache_key $scheme$proxy_host$request_uri;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    add_header X-Cache-Status $upstream_cache_status;
}

When a request matches the location, Nginx checks whether caching is enabled, generates a cache key, and either serves a cached file (HIT) or forwards the request to the backend (MISS). After a MISS, the response is stored in the cache for future hits.
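A common refinement of this lookup step, not shown in the original config, is to bypass the cache for personalized requests so that logged-in users never receive (or pollute) shared cached responses. The sessionid cookie name below is a hypothetical example:

```nginx
# Sketch: skip the cache for requests carrying a session cookie.
# "sessionid" is a hypothetical cookie name — adjust to your application.
location /api/ {
    proxy_pass http://backend;
    proxy_cache cache_zone;
    proxy_cache_key $scheme$proxy_host$request_uri;
    proxy_cache_valid 200 302 10m;

    # Do not serve these requests from cache, and do not store their responses.
    proxy_cache_bypass $cookie_sessionid;
    proxy_no_cache $cookie_sessionid;
}
```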

3. Enable micro‑cache for hot endpoints

location /hot/ {
    proxy_pass http://backend;
    proxy_cache cache_zone;
    proxy_cache_valid 200 2s;
    proxy_cache_lock on;
}

This short‑lived cache (e.g., 2 seconds) protects backend services during traffic spikes such as flash sales or trending news.
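Micro-caching pairs naturally with stale-serving: when the 2-second entry expires under load, clients can be given the stale copy while a single request refreshes it. A sketch of one such combination (the timeout value is an assumption):

```nginx
# Sketch: micro-cache plus stale-serving for spike protection.
location /hot/ {
    proxy_pass http://backend;
    proxy_cache cache_zone;
    proxy_cache_valid 200 2s;
    proxy_cache_lock on;
    # Requests waiting on the lock go to the backend after 3s (their responses are not cached).
    proxy_cache_lock_timeout 3s;
    # Serve the stale entry while it is being refreshed, and on backend errors.
    proxy_cache_use_stale updating error timeout http_500 http_502 http_503 http_504;
    # Refresh expired entries in the background instead of blocking the client.
    proxy_cache_background_update on;
}
```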

4. Enable FastCGI cache for dynamic content

fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=php_cache:100m inactive=30m;

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php8-fpm.sock;
    fastcgi_cache php_cache;
    fastcgi_cache_key $scheme$request_method$host$request_uri;
    fastcgi_cache_valid 200 10m;
    add_header X-FastCGI-Cache $upstream_cache_status;
}

FastCGI caching accelerates PHP (or other FastCGI) responses by storing the generated output. Note that, unlike the proxy cache, fastcgi_cache has no default cache key, so fastcgi_cache_key must be set explicitly.
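Because dynamic pages are the most likely to be personalized, FastCGI caching is usually combined with bypass rules for authenticated traffic. A sketch of such rules, with hypothetical cookie and header conditions that would need matching to the actual PHP application:

```nginx
# Sketch: cache only safe, anonymous requests (place inside the PHP location block).
# GET and HEAD only — this is the default, stated here for clarity.
fastcgi_cache_methods GET HEAD;
# "PHPSESSID" is a common but hypothetical session cookie name.
fastcgi_cache_bypass $cookie_PHPSESSID $http_authorization;
fastcgi_no_cache $cookie_PHPSESSID $http_authorization;
```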

Practical results

Applying the above settings reduced a hotspot API’s latency from 250 ms to 25 ms, cut backend QPS by 90%, and raised overall throughput roughly tenfold. Home-page load time dropped from 400 ms to 30 ms, a 13-fold improvement.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: configuration, caching, Nginx, Proxy Cache, Backend Performance, FastCGI
Written by

Mike Chen's Internet Architecture

Over ten years of BAT architecture experience, shared generously!
