How Nginx Static‑Dynamic Separation Boosts Web Performance
This article explains the principle of Nginx static‑dynamic separation, describes its layered architecture, request routing flow, and key optimization techniques such as caching, compression, load balancing, security limits, and monitoring to dramatically improve overall system performance.
What Is Nginx Static‑Dynamic Separation?
Nginx static‑dynamic separation is a web architecture optimization strategy that classifies website resources into static assets (images, CSS, JavaScript, HTML) and dynamic assets (API responses and personalized pages generated by backend application servers such as Tomcat or Spring Boot).
The core goal is to leverage Nginx’s high‑performance static file handling to offload static requests from backend application servers, thereby significantly improving overall system throughput.
Architecture and Request Flow
The architecture follows a “frontend Nginx + backend application server” layered design, using Nginx’s location module to route requests based on path, file type, or business logic. The overall flow can be visualized as:
User request → Nginx (listen 80)
    ↓ [classify and route]
Static? → Yes → serve from local disk / cache / CDN (e.g., /img/logo.png)
    ↓ No
Dynamic? → proxy to upstream (e.g., /api/user → Tomcat:8080)
    ↓
Tomcat generates the response → Nginx returns it

In the front‑end layer, Nginx acts as a reverse proxy and static file server: it listens on ports 80/443, classifies incoming requests, and serves static files directly from disk, cache, or a CDN, often with response times under 10 ms.
Dynamic requests are proxied to an upstream pool of application servers (e.g., multiple Tomcat instances) where load balancing (round‑robin, weight, IP_HASH) distributes traffic.
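As a minimal sketch, the routing described above might look like the following configuration (the server names, paths, and ports are illustrative, not taken from the article):

```nginx
# Hypothetical pool of Tomcat instances; round-robin by default
upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;
    server_name example.com;

    # Static assets: served directly from disk by Nginx
    location ~* \.(png|jpg|gif|css|js|html)$ {
        root /var/www/static;
        expires 30d;              # let browsers cache static files
    }

    # Dynamic requests: proxied to the application servers
    location /api/ {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The regex `location` matches requests by file extension, while the prefix `location /api/` captures dynamic paths; Nginx evaluates both and routes each request to the appropriate handler.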
Extended Layers
Beyond the basic front‑end/back‑end split, additional layers can further improve performance:
Cache layer: Nginx proxy_cache caches dynamic responses, achieving cache hit rates up to 90 %.
CDN layer: Static assets are offloaded to a CDN; Nginx proxies CDN URLs to reduce origin server load.
Database/Cache layer: Backend services connect to Redis/MySQL for personalized data processing.
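The cache layer above can be sketched with `proxy_cache` roughly as follows (the cache path, zone name, sizes, and timings are illustrative assumptions):

```nginx
# Define a cache zone in the http{} context
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location /api/ {
        proxy_cache api_cache;
        proxy_cache_valid 200 302 10m;     # cache successful responses briefly
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating;  # serve stale on backend errors
        add_header X-Cache-Status $upstream_cache_status;  # observe HIT/MISS rate
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The `X-Cache-Status` header makes the hit rate directly observable from responses, which is useful when tuning the `inactive` and `proxy_cache_valid` windows.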
Key Optimization Points
Cache optimization: Enable proxy_cache for dynamic APIs; high hit rates can reduce backend QPS by up to 80 %.
Compression: Use Gzip for static files to halve transfer size, beneficial for bandwidth‑constrained environments.
Load balancing: Configure upstream with IP_HASH for session stickiness and keepalive to reuse connections, boosting throughput by roughly 20 %.
Security & performance: Apply limit_req to throttle abusive API calls and disable access_log for static paths to reduce logging overhead.
Monitoring: Integrate Prometheus + Grafana to track QPS, cache hit rate, etc.; use load‑testing tools like ab or wrk for performance validation.
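Putting these optimization points together, a sketch of one possible configuration follows (rate limits, pool names, and paths are illustrative assumptions, not prescriptions):

```nginx
# Throttling: 10 requests/s per client IP with a small burst allowance
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

# Gzip compression for text-based assets
gzip on;
gzip_types text/css application/javascript application/json;
gzip_min_length 1024;             # skip tiny files where gzip gains little

upstream tomcat_pool {
    ip_hash;                      # session stickiness per client IP
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    keepalive 32;                 # reuse connections to the backends
}

server {
    listen 80;

    location /static/ {
        root /var/www;
        access_log off;           # skip logging for static paths
        expires 30d;
    }

    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://tomcat_pool;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
    }
}
```

Note that upstream keepalive only takes effect when the proxied connection uses HTTP/1.1 with the `Connection` header cleared, as shown in the `/api/` block.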
By combining these techniques, Nginx can serve static content with sub‑10 ms latency while efficiently proxying dynamic requests, resulting in a highly scalable and performant web architecture.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Architect Chen
Sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.