Mastering Nginx Static‑Dynamic Separation: Principles, Architecture & Config
This article explains how Nginx static‑dynamic separation works, why it boosts performance, the core design principles, typical deployment architectures, and provides a complete configuration example with caching and rate‑limiting to dramatically reduce backend load.
Nginx Static‑Dynamic Separation Overview
Nginx static‑dynamic separation is one of the most classic high‑performance optimization techniques for web architectures. It distinguishes between dynamic requests that require backend application processing (e.g., .php, .jsp, .do) and static resources such as HTML, CSS, JS, images, and videos, which Nginx can serve directly.
Why Use Separation?
Static files typically account for 70%‑90% of web traffic, and Nginx can serve them more than ten times faster than typical application servers.
Reduces concurrent connections and CPU load on backend servers (Tomcat, PHP‑FPM, Node.js, etc.).
Enables expires headers for browser caching and gzip compression, further lowering bandwidth usage and response times (a minimal sketch follows this list).
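As a hedged sketch of that last point, a static location can set expires headers and enable gzip like this (the path and MIME types below are illustrative assumptions, not taken from a real deployment):

# Browser caching + compression for static assets (illustrative sketch)
location ~* \.(css|js|png|jpg|svg)$ {
    root /var/www/static;        # assumed document root
    expires 30d;                 # sets Expires/Cache-Control headers for browsers
    add_header Cache-Control "public";
    gzip on;                     # compress eligible responses
    gzip_types text/css application/javascript image/svg+xml;
}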
Core Principle
Nginx uses location rules to match static file extensions or paths. When a request matches, Nginx returns the file from the local filesystem, a cache, or a dedicated static server. Non‑matching requests are reverse‑proxied to the appropriate backend application.
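In its simplest form, this is just two location blocks; the sketch below (paths and the upstream name are assumptions) shows a regex match served from disk and a catch-all reverse-proxied to the backend:

server {
    listen 80;
    # Regex match: static extensions are answered from the local filesystem
    location ~* \.(html|css|js|png)$ {
        root /var/www/static;
    }
    # Everything else falls through and is reverse-proxied to the application
    location / {
        proxy_pass http://app_backend;   # assumed upstream name
    }
}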
Key Benefits
Leverages Nginx’s strength in high‑concurrency static file distribution.
Allows backend services to focus solely on business logic.
Keeps static asset requests from consuming backend connections they never needed.
Typical Architectures
Two common patterns are used:
Single Nginx split architecture: one Nginx instance serves static files from local disk and forwards dynamic requests to backend applications.
Independent static resource architecture: static assets live on a dedicated static server or CDN, while Nginx acts only as a unified entry point and dynamic request forwarder (see the sketch after this list).
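For the second pattern, a hedged sketch: Nginx remains the single entry point but proxies static paths to a dedicated static tier (the /static/ prefix, host names, and ports are illustrative assumptions):

upstream static_servers {
    server static1.internal:8080;   # dedicated static file server (illustrative)
}
upstream app_servers {
    server app1.internal:8080;      # application server (illustrative)
}
server {
    listen 80;
    server_name example.com;
    # Static assets live on a separate tier, not on this machine
    location /static/ {
        proxy_pass http://static_servers;
        expires 7d;
    }
    # Dynamic traffic still goes to the application backend
    location / {
        proxy_pass http://app_servers;
    }
}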
Configuration Example
The following Nginx configuration demonstrates cache and rate‑limit zone definitions, static file handling, and rate‑limited dynamic request proxying.
http {
    # Shared cache for proxied responses: 10 MB of keys, up to 1 GB of content
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m use_temp_path=off;

    # Rate-limit zone referenced below: 10 requests/second per client IP
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

    # Backend application pool referenced by proxy_pass
    upstream backend {
        server 127.0.0.1:8080;   # replace with your application server(s)
    }

    server {
        listen 80;
        server_name example.com;

        # Serve static resources straight from disk; proxy_cache is omitted
        # here because it only applies to proxied responses
        location ~* \.(css|js|png|jpg|gif|ico|svg)$ {
            root /var/www/static;
            expires max;
        }

        # Proxy dynamic requests, caching cacheable backend responses
        location / {
            proxy_pass http://backend;
            proxy_cache my_cache;
            proxy_cache_valid 200 302 1h;
            limit_req zone=one burst=20 nodelay;   # simple rate-limit example
        }
    }
}

This snippet defines a shared cache and a per‑IP rate‑limit zone, serves common static file extensions from disk with aggressive caching headers, and forwards all other traffic to a backend pool while caching eligible responses and applying a basic request‑rate limit.
Performance Impact
Properly applying static‑dynamic separation, expires, and proxy_cache can reduce backend pressure by more than 70%, turning a “working” Nginx deployment into a high‑performance gateway.
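To verify the cache is actually absorbing load, a common (but optional) convention is to surface Nginx's cache status in a response header; a minimal sketch, assuming the my_cache zone from the example above:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    # Emits HIT/MISS/EXPIRED per response, so cache effectiveness can be
    # checked with curl -I or browser developer tools
    add_header X-Cache-Status $upstream_cache_status;
}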
Architect Chen
Sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.