Master Nginx: Reverse Proxy, Load Balancing, Static Assets, Rate Limiting, and HTTPS

This guide walks backend engineers through configuring Nginx for reverse proxy, load balancing, static‑resource handling, rate limiting with IP blacklists, and HTTPS encryption, providing ready‑to‑use code snippets and practical deployment steps to keep services stable and secure.


What is Nginx?

Nginx is a high‑performance reverse‑proxy server that sits in front of your backend services: it terminates incoming connections, serves static files, compresses responses, and shields the backends from malicious traffic.

External requests first pass through Nginx, which then forwards them to the appropriate backend (reverse proxy).

Thanks to its event‑driven, asynchronous architecture, a single Nginx worker can handle tens of thousands of concurrent connections — far more than a thread‑per‑request application server such as Tomcat.

It also serves static files, compresses responses, and acts as a security guard.

Scenario 1: Reverse Proxy & Load Balancing

Goal: Distribute traffic evenly across three backend servers, hide their real IPs, and automatically remove unhealthy nodes.

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections 10240;      # Max simultaneous connections per worker
}

http {
    upstream backend_servers {
        least_conn;                # Send each request to the server with the fewest active connections
        server 192.168.1.10:8080;  # Backend A
        server 192.168.1.11:8080;  # Backend B
        server 192.168.1.12:8080;  # Backend C
        keepalive 32;              # Pool of up to 32 idle upstream connections per worker
    }

    server {
        listen       80;
        server_name  www.yourdomain.com;

        location /api/ {
            proxy_pass http://backend_servers/;
            proxy_http_version 1.1;          # Required for upstream keepalive
            proxy_set_header Connection "";  # Required for upstream keepalive
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # On connect errors, timeouts, or 500s, retry the next server
            proxy_next_upstream error timeout http_500;
            proxy_connect_timeout 30s;
            proxy_read_timeout 60s;
            proxy_send_timeout 60s;
        }
    }
}

All external traffic reaches Nginx, which forwards requests to the three backend servers.

During high‑traffic events, load is spread evenly, preventing any single server from being overwhelmed.
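least_conn is only one of several balancing strategies. Weights, failure thresholds, and backup nodes can tune the distribution further; a minimal sketch, assuming the same backends plus a hypothetical standby at 192.168.1.13:

```nginx
upstream backend_servers {
    # Send roughly 3 of every 4 requests to the beefier machine
    server 192.168.1.10:8080 weight=3;
    server 192.168.1.11:8080 weight=1;
    # Mark this node unhealthy after 3 failures within 30 seconds
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
    # Only receives traffic when all primary servers are down
    server 192.168.1.13:8080 backup;
}
```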

Scenario 2: Static Resource Handling & Separation

Goal: Let Nginx serve images, CSS, and JavaScript directly, reducing backend load and improving page‑load speed.

server {
    listen       80;
    server_name  www.yourdomain.com;

    # Static assets
    location /static/ {
        root /data/;               # Files are under /data/static/
        autoindex off;
        expires 30d;
        gzip on;
        gzip_types text/css application/javascript;  # PNGs are already compressed; skip them
    }

    # Images with hotlink protection
    location /images/ {
        root /data/;
        valid_referers none blocked www.yourdomain.com;
        if ($invalid_referer) { return 403; }
    }

    # Dynamic API requests still go to backend
    location /api/ {
        proxy_pass http://backend_servers/;
    }
}

Static files are served directly by Nginx, which is typically far faster than routing them through the backend.

Browser caching and gzip compression make subsequent loads almost instantaneous.
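One subtlety worth knowing here: with `root`, the location prefix is appended to the path, while `alias` replaces it. A sketch (the /data/assets/ directory is a made-up example):

```nginx
# root:  request /static/logo.png  ->  file /data/static/logo.png
location /static/ {
    root /data/;
}

# alias: request /assets/logo.png  ->  file /data/assets/logo.png
location /assets/ {
    alias /data/assets/;
}
```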

Scenario 3: Rate Limiting & IP Blacklist

Goal: Limit concurrent connections and request rate per IP, and block malicious IPs from accessing the login endpoint.

# Define shared limit zones in the http block
http {
    limit_conn_zone $binary_remote_addr zone=ip_conn:10m;          # Track connections per IP
    limit_req_zone  $binary_remote_addr zone=ip_req:10m rate=5r/s; # 5 requests per second per IP

    server {
        listen 80;
        server_name www.yourdomain.com;

        location /api/login {
            limit_conn ip_conn 10;                  # Max 10 concurrent connections per IP
            limit_req zone=ip_req burst=10 nodelay; # Absorb short bursts, then reject
            deny  10.0.0.1;                         # Explicitly blacklist this IP
            allow 192.168.1.0/24;                   # Whitelisted subnet
            deny  all;                              # Everyone else gets 403
            proxy_pass http://backend_servers/;
        }
    }
}

Malicious IPs that flood the API are blocked with a 403 response, keeping logs clean.

Rate limiting on the login endpoint blunts credential‑stuffing and HTTP‑flood (CC) attacks.
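By default Nginx answers rejected requests with 503; returning 429 (Too Many Requests) tells clients more precisely why they were refused. A sketch reusing the zones above:

```nginx
location /api/login {
    limit_conn ip_conn 10;
    limit_req zone=ip_req burst=10 nodelay;
    limit_conn_status  429;  # "Too Many Requests" instead of 503
    limit_req_status   429;
    limit_req_log_level warn; # Log rejections at warn level for easy grepping
    proxy_pass http://backend_servers/;
}
```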

Scenario 4: HTTPS Configuration

Goal: Enable TLS encryption so browsers show the padlock and data in transit is protected.

# Redirect all plain-HTTP traffic to HTTPS
server {
    listen      80;
    server_name www.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen       443 ssl;
    server_name  www.yourdomain.com;
    ssl_certificate     /etc/nginx/ssl/yourdomain.crt;
    ssl_certificate_key /etc/nginx/ssl/yourdomain.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://backend_servers/;
    }
}

The browser displays the padlock, satisfying security requirements.

All traffic is encrypted, preventing man‑in‑the‑middle attacks on credentials.
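Two common hardening additions worth considering (a sketch — verify the directives against your Nginx version): TLS session resumption to cut handshake cost, and HSTS to keep browsers on HTTPS.

```nginx
server {
    listen       443 ssl;
    server_name  www.yourdomain.com;
    ssl_certificate     /etc/nginx/ssl/yourdomain.crt;
    ssl_certificate_key /etc/nginx/ssl/yourdomain.key;

    # Reuse TLS sessions to avoid full handshakes on repeat visits
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # Tell browsers to use HTTPS only, for the next ~6 months
    add_header Strict-Transport-Security "max-age=15768000" always;
}
```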

Deploying Nginx

1. Install Nginx

Linux: yum install nginx (CentOS) or apt-get install nginx (Ubuntu)

Windows: download the official zip, extract, and run nginx.exe

2. Start / Restart

sudo systemctl start nginx    # start
sudo systemctl reload nginx   # apply config changes without dropping connections
sudo systemctl restart nginx  # full restart (brief downtime)

3. Test Configuration

nginx -t   # verify syntax; fix errors before starting

Conclusion

Reverse Proxy: Hide backend IPs and protect services.

Load Balancing: Evenly distribute traffic to avoid overload.

Static Resources: Let Nginx serve images, CSS, and JS for faster response.

Rate Limiting & Blacklist: Block abusive requests and keep logs clean.

HTTPS: Encrypt traffic and gain user trust.

After deployment, monitor server load and user feedback; adjust limits or add new rules as traffic patterns change.
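For basic load monitoring, Nginx ships a minimal counters endpoint via ngx_http_stub_status_module (compiled in by default in most distribution packages). A sketch restricted to local access:

```nginx
location /nginx_status {
    stub_status;        # Active connections, accepts, handled, total requests
    allow 127.0.0.1;    # Only reachable from the server itself
    deny  all;
}
```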

Written by macrozheng

Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.
