
Master Nginx: Backend Engineer’s Guide to Deployment, Load Balancing & Security

This article explains how backend engineers can use Nginx as a versatile deployment tool, covering reverse proxy, load balancing, static file serving, rate limiting, HTTPS encryption, and quick installation steps to keep services stable and performant.


Nginx is a high‑performance web server and reverse proxy that handles load balancing, static file serving, rate limiting, HTTPS termination, and more, making it an essential tool for backend engineers.

1. What is Nginx?

Simply put, Nginx acts like a security guard at the entrance of your company, forwarding external requests to your backend services.

External requests first pass through Nginx, then are proxied to your backend (reverse proxy).

Its event‑driven architecture can handle tens of thousands of concurrent connections, far more than a traditional thread‑per‑request server like Tomcat.

It also serves static files, compresses data, and protects against malicious attacks.

Example: hide the real IP of your e‑commerce service behind Nginx so attackers cannot target the backend directly.
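As a minimal sketch of that idea (the domain, port, and backend address are placeholder assumptions), a bare‑bones reverse proxy needs only a few lines:

```nginx
# Forward all requests arriving on port 80 to a backend on port 8080
server {
    listen 80;
    server_name www.yourdomain.com;
    location / {
        proxy_pass http://127.0.0.1:8080;         # the backend stays hidden from clients
        proxy_set_header X-Real-IP $remote_addr;  # still pass the client IP along
    }
}
```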

2. Scenario 1: Reverse Proxy & Load Balancing (essential for high concurrency)

Goal: evenly distribute traffic to three backend servers, hide the real IP, and automatically remove failed nodes.

# Global configuration
user  nginx;  # run user
worker_processes  auto;  # one worker per CPU core
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

# Load‑balancing configuration
upstream backend_servers {
    least_conn;  # send new requests to the server with the fewest active connections
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;  # backend A
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;  # backend B
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;  # backend C
    keepalive 32;  # keep up to 32 idle persistent connections per worker
}

server {
    listen       80;
    server_name  www.yourdomain.com;
    location /api/ {
        proxy_pass http://backend_servers/;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # required for upstream keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout http_500;  # retry the next server on failure
        proxy_connect_timeout 30s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
    }
}

Key points:

Backend IPs are hidden behind Nginx’s public IP.

During traffic spikes, requests are evenly spread across three servers, preventing any single server from crashing.
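One caveat: if the backend keeps session state in memory, spreading requests across servers will break logins. As a sketch of one alternative (not part of the config above), `ip_hash` pins each client IP to the same backend:

```nginx
# Sketch: session affinity by client IP — use only when sessions are not shared
upstream backend_servers {
    ip_hash;  # the same client IP always reaches the same backend
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
```

A shared session store (e.g. Redis) is usually the cleaner long‑term fix, since `ip_hash` distributes load unevenly behind corporate NATs.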

3. Scenario 2: Static Resource Handling & Separation (speed up page loads)

Goal: let Nginx serve images, CSS, and JavaScript directly, reducing backend load.

server {
    listen       80;
    server_name  www.yourdomain.com;
    location /static/ {
        root /data/;  # files are served from /data/static/
        autoindex off;
        expires 30d;  # let browsers cache for 30 days
        gzip on;
        gzip_types text/css application/javascript;  # PNG/JPEG are already compressed
    }
    location /images/ {
        root /data/;
        valid_referers none blocked www.yourdomain.com;
        if ($invalid_referer) {
            return 403;  # block hotlinking
        }
    }
    location /api/ {
        proxy_pass http://backend_servers/;
    }
}

Key points:

Nginx serves static files straight from disk, typically far faster than routing them through the application server.

Browser caching and compression make subsequent visits load instantly.
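One detail worth knowing when mapping URLs to disk paths: with `root`, the full request URI is appended to the path, while `alias` replaces the matched prefix. A small sketch (the paths are assumptions):

```nginx
location /static/ {
    root /data/;          # /static/app.css  ->  /data/static/app.css
}
location /assets/ {
    alias /data/static/;  # /assets/app.css  ->  /data/static/app.css
}
```

Mixing the two up is one of the most common causes of unexplained 404s on static files.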

4. Scenario 3: Rate Limiting & IP Black/White List (prevent malicious attacks)

Goal: limit concurrent connections and request frequency per IP, and block malicious IPs.

# Define rate‑limiting policies (inside the http block)
http {
    limit_conn_zone $binary_remote_addr zone=ip_conn:10m;  # 10 MB shared zone for per‑IP connection counts
    limit_req_zone $binary_remote_addr zone=ip_req:10m rate=5r/s;  # allow 5 requests per second per IP
}

server {
    listen 80;
    server_name www.yourdomain.com;
    location /api/login {
        allow 192.168.1.0/24;  # allowed internal range
        deny 10.0.0.1;         # explicitly block this IP
        deny all;              # reject everyone else
        limit_conn ip_conn 10;  # at most 10 concurrent connections per IP
        limit_req zone=ip_req burst=10 nodelay;  # absorb bursts of up to 10 extra requests
        proxy_pass http://backend_servers/;
    }
}

Key points:

Malicious IPs are blocked, keeping logs clean.

Login API is protected from CC attacks.
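If trusted clients (monitoring probes, office IPs) should bypass the rate limit entirely, one common pattern is to key the zone on a variable that is empty for trusted addresses, since requests with an empty key are never limited. A sketch, with an assumed trusted range:

```nginx
# Inside the http block: trusted IPs get an empty key and so bypass limit_req
geo $limit {
    default        1;
    192.168.1.0/24 0;  # assumed trusted internal range
}
map $limit $limit_key {
    0 "";                    # trusted: empty key, never limited
    1 $binary_remote_addr;   # everyone else: keyed by IP
}
limit_req_zone $limit_key zone=ip_req_wl:10m rate=5r/s;
```

The `ip_req_wl` zone name is illustrative; apply it with `limit_req zone=ip_req_wl ...` in the relevant location.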

5. Scenario 4: HTTPS Configuration (encrypted data transmission)

Goal: enable TLS so data in transit is encrypted and browsers show the padlock.

# Redirect plain HTTP to HTTPS in a separate server block
# (a rewrite inside the 443 server would loop forever)
server {
    listen       80;
    server_name  www.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen       443 ssl;
    server_name  www.yourdomain.com;
    ssl_certificate      /etc/nginx/ssl/yourdomain.crt;
    ssl_certificate_key  /etc/nginx/ssl/yourdomain.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://backend_servers/;
    }
}

Key points:

The padlock in the address bar reassures users (and product managers).

Data is encrypted, preventing man‑in‑the‑middle attacks.
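For local testing before a CA‑issued certificate arrives, a self‑signed pair can be generated with openssl (the file names and CN below are assumptions; browsers will still warn on self‑signed certificates):

```shell
# Generate a throwaway self-signed key + certificate valid for 365 days
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout yourdomain.key -out yourdomain.crt \
    -days 365 -subj "/CN=www.yourdomain.com"
```

Point `ssl_certificate` and `ssl_certificate_key` at the generated files, then run `nginx -t` before reloading.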

6. How to Get Nginx Running (quick deployment)

Installation:

sudo yum install nginx      # CentOS
sudo apt-get install nginx  # Ubuntu
# Windows: download the official package and run nginx.exe

Start / restart:

sudo systemctl start nginx    # start
sudo systemctl reload nginx   # apply config changes without dropping connections
sudo systemctl restart nginx  # full restart (brief downtime)

Check configuration:

sudo nginx -t  # validate the configuration; fix errors before starting or reloading

Conclusion

Reverse proxy: hide backend IPs, keep services safe.

Load balancing: distribute traffic, avoid server overload.

Static resources: let Nginx serve files, let the backend focus on APIs.

Rate limiting: block malicious requests, keep logs clean.

HTTPS: encrypted transmission, green lock for users.

Remember, Nginx configuration is not a one‑time task; adjust limits and rules based on traffic and user feedback to keep your service rock‑solid.

Written by

Open Source Linux

Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.
