Boost Web Performance: Master Nginx Static‑Dynamic Separation

This article explains how Nginx can separate static assets from dynamic requests using location rules and reverse‑proxying, provides a complete configuration example, and details the performance gains from zero‑copy file serving, gzip compression, caching headers, and CDN integration.

Mike Chen's Internet Architecture

What Is Static‑Dynamic Separation?

Static‑dynamic separation is an architecture pattern that routes "static resources" (images, JS, CSS, HTML, fonts, etc.) and "dynamic requests" (JSP, PHP, Python, Go, Java APIs) through different processing paths, allowing each to be handled by the most suitable component.

Nginx’s Role in the Separation Architecture

Nginx acts as a front‑end server and uses location matching rules to decide whether a request is for a static file or a dynamic endpoint. Static files are served directly from local disk or distributed storage (CDN, object storage) using the sendfile zero‑copy mechanism, while dynamic requests are proxied to backend application servers such as Tomcat or Spring Boot.

Configuration Example

```nginx
# Backend pool referenced by proxy_pass below; the server
# addresses are placeholders for your own application nodes.
upstream java_backend_pool {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

server {
    listen 80;
    server_name www.bat-arch.com;

    # 1. Dynamic requests: forward to the backend application pool
    location /api/ {
        proxy_pass http://java_backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # 2. Static resources: regex match and serve from a local directory
    location ~* \.(gif|jpg|jpeg|png|css|js|ico)$ {
        root /usr/share/nginx/html/static/;
        expires 30d;  # cache for 30 days
        add_header Cache-Control "public, no-transform";
    }
}
```

How Nginx Determines Request Type

When Nginx receives a request, it evaluates its location rules against the URI. If the extension matches the static‑file regex, Nginx reads the file directly from disk using sendfile, which avoids copying data through user space and keeps CPU overhead minimal. Otherwise, the request is treated as dynamic and forwarded via proxy_pass to an upstream server pool.
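The decision described above can be sketched in a few lines of Python — a simplified model of the regex‑location match, not Nginx's actual implementation (Nginx evaluates prefix and regex locations by its own precedence rules):

```python
import re

# Mirrors the extension regex from the config example:
# location ~* \.(gif|jpg|jpeg|png|css|js|ico)$  (~* = case-insensitive)
STATIC_RE = re.compile(r"\.(gif|jpg|jpeg|png|css|js|ico)$", re.IGNORECASE)

def route(uri: str) -> str:
    """Return 'static' for asset URIs, 'dynamic' for everything else."""
    # Nginx matches locations against the path only, so drop the query string.
    path = uri.split("?", 1)[0]
    return "static" if STATIC_RE.search(path) else "dynamic"

print(route("/img/logo.PNG"))     # static (case-insensitive match)
print(route("/api/users?id=42"))  # dynamic (falls through to proxy_pass)
```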

Performance Benefits of Static‑Dynamic Separation

1. **Reduced Backend Load** – Static assets are served by Nginx, eliminating context switches and business‑logic processing on application servers.

2. **Efficient I/O** – Nginx’s asynchronous event‑driven model, zero‑copy sendfile, and built‑in gzip compression lower CPU and memory usage.

3. **Caching** – Proper Cache-Control, ETag, and Expires headers, together with optional proxy_cache, reduce disk and network I/O.

4. **Edge Acceleration** – Combining Nginx caching with CDN offloads a large portion of traffic to edge nodes, dramatically decreasing origin‑server pressure.
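The gzip and proxy‑cache behaviors mentioned above take only a few directives to enable. A minimal sketch — the cache path, zone name, sizes, and timings here are illustrative and should be tuned per deployment:

```nginx
http {
    # On-the-fly compression for text-based assets
    gzip on;
    gzip_types text/css application/javascript application/json;
    gzip_min_length 1024;  # skip tiny responses, where gzip gains nothing

    # Shared cache zone for proxied (dynamic) responses
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        location /api/ {
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;  # cache successful responses for 10 minutes
            proxy_pass http://java_backend_pool;
        }
    }
}
```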

In practice, this pattern is widely reported to yield order‑of‑magnitude performance improvements for static‑heavy workloads and is widely adopted by large tech companies such as Alibaba, Tencent, and ByteDance.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Performance Optimization, Caching, Nginx, Reverse Proxy, Static Assets, Dynamic Requests
Written by

Mike Chen's Internet Architecture

Over ten years of BAT architecture experience, shared generously!
