How to Implement Nginx Static‑Dynamic Separation for Faster Web Performance

This guide explains the concept of Nginx static‑dynamic separation, shows how to configure static resource caching and dynamic request routing with upstream groups, and outlines a phased migration plan to improve performance and scalability of web services.

Architect Chen

Nginx is a core component in large‑scale architectures. Static‑dynamic separation means handling static assets (HTML, CSS, JavaScript, images, fonts, video, etc.) separately from dynamic requests that require server‑side rendering, database interaction, or session handling.

Static Resource Architecture

Nginx serves static files directly, leveraging kernel cache, sendfile, and asynchronous I/O to minimize context switches and system calls. Static assets are typically placed in a dedicated directory or served from a separate domain/sub‑domain.
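As an illustration, the kernel-level optimizations mentioned above can be enabled in the http block. The values below are common starting points, not tuned recommendations, and `aio threads` requires an Nginx build with thread-pool support:

```nginx
http {
    sendfile        on;        # zero-copy transfer from the page cache to the socket
    tcp_nopush      on;        # send headers and the start of the file in one packet
    aio             threads;   # offload blocking disk reads to a thread pool
    open_file_cache max=10000 inactive=30s;  # cache open file descriptors and metadata
}
```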

Static‑Dynamic Separation Diagram

Example location block for caching common static file types:

# Long-lived browser caching for common static asset types
# (note: HTML entry points are often given a much shorter TTL in practice)
location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|eot|svg|webp|html)$ {
    expires 30d;                        # sets Expires and Cache-Control: max-age
    add_header Cache-Control "public";  # allow shared caches (proxies, CDNs) to store it
}

Static caching strategies combine browser cache, Nginx proxy cache, and CDN cache. Use long‑term caching for static files and short‑term (5–60 seconds) or versioned caching for dynamic API responses to avoid data inconsistency.
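A minimal sketch of the Nginx proxy-cache layer described above; the zone name, cache path, TTLs, and the `static_origin` upstream are illustrative assumptions:

```nginx
# Define a cache zone (in the http block); path and sizing are examples
proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=7d use_temp_path=off;

location /assets/ {
    proxy_cache       static_cache;
    proxy_cache_valid 200 301 302 30d;   # long TTL for successfully fetched static files
    proxy_cache_valid any 1m;            # short TTL for everything else
    add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS, useful for debugging
    proxy_pass http://static_origin;     # assumed upstream serving the assets
}
```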

Dynamic Request Routing Architecture

Dynamic requests are proxied to backend application servers (e.g., Tomcat, Gunicorn, Node.js) via an upstream definition. Load‑balancing methods such as round‑robin, weighted, least‑connections, or IP‑hash can be applied.

Dynamic Request Flow Diagram

Sample upstream and location configuration for dynamic APIs:

upstream backend_upstream {
    # Default round-robin distribution across the application servers
    server app1.internal:8080;
    server app2.internal:8080;
    server app3.internal:8080;
}

location /api/ {
    proxy_pass http://backend_upstream;
    proxy_set_header Host $host;              # preserve the original Host header
    proxy_set_header X-Real-IP $remote_addr;  # pass the client IP to the backend
    proxy_read_timeout 60s;                   # give slow backends up to 60 seconds
    proxy_connect_timeout 5s;                 # fail unreachable backends quickly
}
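The upstream above uses the default round-robin method. A sketch of the weighted and least-connections variants mentioned earlier, with passive health checks (these directives exist in open-source Nginx; active health probes require Nginx Plus or a third-party module):

```nginx
upstream backend_upstream {
    least_conn;                               # route to the server with the fewest active connections
    server app1.internal:8080 weight=3 max_fails=3 fail_timeout=30s;
    server app2.internal:8080 weight=2 max_fails=3 fail_timeout=30s;
    server app3.internal:8080 backup;         # only used when the others are unavailable
}
```

`max_fails` and `fail_timeout` implement passive health checking: after three failed attempts within 30 seconds, the server is taken out of rotation for 30 seconds.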

Dynamic content caching should be used cautiously: cache only safe (idempotent, non-personalized) endpoints, with short TTLs or segmented caching, to maintain data consistency.

Phased Migration Steps

1. Separate the static resource domain and apply optimized caching policies.

2. Introduce backend upstream clusters with health checks.

3. Integrate a CDN for further performance gains.
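Putting the phases together, a single server block can route by location. The paths, domain, and upstream name here are illustrative:

```nginx
server {
    listen      80;
    server_name example.com;

    # Phase 1: static assets served directly from disk with long-lived caching
    location /static/ {
        root /var/www;
        expires 30d;
        add_header Cache-Control "public";
    }

    # Phase 2: dynamic requests proxied to the application cluster
    location /api/ {
        proxy_pass http://backend_upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```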

By following these steps, static assets are served efficiently while dynamic requests are balanced across application servers, resulting in reduced resource contention and lower response latency.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: load balancing, Nginx, reverse proxy, static-dynamic separation
Written by Architect Chen, sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.