
Master Nginx: Reverse Proxy, Load Balancing, and HTTPS Configuration Guide

This article provides a comprehensive overview of Nginx: its role as a lightweight web server and reverse proxy, essential command-line controls, and practical configuration examples covering basic reverse proxying, load balancing, multi-webapp routing, static site serving, file servers, HTTPS, and CORS handling.


Overview

Nginx (engine x) is a lightweight web server, reverse proxy server, and mail (IMAP/POP3) proxy.

What is a Reverse Proxy?

A reverse proxy receives client requests from the Internet, forwards them to internal servers, and returns the server responses to the clients, appearing to the outside world as a single server.
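
In Nginx terms, the smallest useful reverse proxy is a `server` block whose `location` hands requests to a backend via `proxy_pass`. A minimal sketch (the backend address `127.0.0.1:8080` and the host name are assumptions for illustration):

<code># Minimal reverse proxy: nginx listens publicly and forwards to an internal app
server {
    listen 80;
    server_name example.local;              # hypothetical host name
    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed internal backend address
    }
}</code>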

Usage

Common Nginx commands:

<code>nginx -s stop       # fast shutdown: terminate immediately without finishing in-flight requests
nginx -s quit       # graceful shutdown: finish serving current requests, then exit
nginx -s reload     # Reload configuration after changes
nginx -s reopen     # Reopen log files
nginx -c filename   # Use a specific configuration file instead of the default
nginx -t            # Test configuration syntax without starting the server
nginx -v            # Show Nginx version
nginx -V            # Show Nginx version, compiler version, and configure parameters</code>

For convenience on Windows you can create a startup.bat batch file:

<code>@echo off
rem If Nginx is already running and a PID file exists, kill the process
nginx.exe -s stop
rem Test configuration syntax
nginx.exe -t -c conf/nginx.conf
rem Show version information
nginx.exe -v
rem Start Nginx with a specific configuration
nginx.exe -c conf/nginx.conf</code>

On Linux a similar shell script can be used.

Nginx Configuration Practice

Example of a simple HTTP reverse proxy (no complex settings):

<code># user nobody;  # optional: run as a specific user
worker_processes 1;
error_log logs/error.log;
pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '[${remote_addr}] - [${remote_user}] [${time_local}] "${request}" ${status} ${body_bytes_sent} "${http_referer}" "${http_user_agent}" "${http_x_forwarded_for}"';
    access_log logs/access.log main;
    sendfile on;
    keepalive_timeout 120;
    tcp_nodelay on;

    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    server {
        listen 80;
        server_name www.helloworld.com;
        index index.html;
        root D:/01_Workspace/Project/github/zp/SpringNotes/spring-security/spring-shiro/src/main/webapp;
        charset utf-8;

        # Basic reverse proxy
        location / {
            proxy_pass http://zp_server1;
        }

        # Serve static files directly
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root D:/01_Workspace/Project/github/zp/SpringNotes/spring-security/spring-shiro/src/main/webapp/views;
            expires 30d;
        }

        # Status page
        location /NginxStatus {
            stub_status on;
            access_log off;   # keep status checks out of the access log
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;
        }

        # Deny access to hidden files
        location ~ /\.ht {
            deny all;
        }
    }
}
</code>
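
With a bare `proxy_pass`, as above, the backend sees Nginx rather than the browser as its client. A common refinement, sketched here, is to forward the original host and client address with `proxy_set_header`:

<code>location / {
    proxy_pass http://zp_server1;
    proxy_set_header Host $host;                                  # preserve the requested host name
    proxy_set_header X-Real-IP $remote_addr;                      # real client IP for backend logs
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # append to any existing forwarding chain
    proxy_set_header X-Forwarded-Proto $scheme;                   # whether the client used http or https
}</code>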

Load Balancing Configuration

When multiple backend servers are available, Nginx can distribute traffic using weighted load balancing:

<code>http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    upstream load_balance_server {
        server 192.168.1.11:80 weight=5;
        server 192.168.1.12:80 weight=1;
        server 192.168.1.13:80 weight=6;
    }

    server {
        listen 80;
        server_name www.helloworld.com;
        location / {
            root /root;
            index index.html index.htm;
            proxy_pass http://load_balance_server;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
        }
    }
}
</code>
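
Weight-based distribution is a variant of the default round-robin scheme. The upstream module also supports other strategies; a sketch of two common ones, using the same hypothetical backends as above:

<code># Session-sticky: hash the client IP so the same visitor lands on the same backend
upstream sticky_server {
    ip_hash;
    server 192.168.1.11:80;
    server 192.168.1.12:80;
}

# Least connections: send each request to the backend with the fewest active connections
upstream least_conn_server {
    least_conn;
    server 192.168.1.11:80;
    server 192.168.1.12:80;
}</code>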

Multiple Webapp Configuration

When a site hosts several independent web applications (e.g., finance, product, admin), each can run on a different port and be exposed through Nginx using context paths:

<code>http {
    upstream product_server { server www.helloworld.com:8081; }
    upstream admin_server   { server www.helloworld.com:8082; }
    upstream finance_server { server www.helloworld.com:8083; }

    server {
        listen 80;
        server_name www.helloworld.com;
        # default to product
        location / { proxy_pass http://product_server; }
        location /product/ { proxy_pass http://product_server; }
        location /admin/   { proxy_pass http://admin_server; }
        location /finance/ { proxy_pass http://finance_server; }
    }
}
</code>
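
Note that with `proxy_pass http://admin_server;` (no URI part) the `/admin/` prefix is forwarded to the backend unchanged. If a backend app is mounted at its own root instead, a trailing slash on `proxy_pass` strips the matched prefix:

<code># /admin/users  ->  http://admin_server/users   (the /admin/ prefix is replaced by "/")
location /admin/ {
    proxy_pass http://admin_server/;
}</code>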

HTTPS Reverse Proxy Configuration

For sites requiring secure communication, configure Nginx to listen on port 443 with SSL certificates:

<code>server {
    listen 443 ssl;
    server_name www.helloworld.com;
    ssl_certificate     cert.pem;
    ssl_certificate_key cert.key;
    ssl_session_cache   shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        root /root;
        index index.html index.htm;
    }
}
</code>
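
A common companion to the block above is a plain-HTTP server that redirects all traffic to HTTPS; a sketch using the same host name:

<code>server {
    listen 80;
    server_name www.helloworld.com;
    return 301 https://$host$request_uri;   # permanent redirect to the HTTPS endpoint
}</code>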

Static Site Configuration

To serve a static website (HTML and assets) from a directory:

<code>worker_processes 1;

events { worker_connections 1024; }

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
    gzip_vary on;

    server {
        listen 80;
        server_name static.zp.cn;
        location / {
            root /app/dist;
            index index.html;
            # redirect any request to index.html if needed
        }
    }
}
</code>
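
For single-page applications whose routing happens in the browser, the redirect mentioned in the comment above is usually realized with `try_files`, which falls back to `index.html` for paths that don't exist on disk. A sketch:

<code>location / {
    root /app/dist;
    index index.html;
    try_files $uri $uri/ /index.html;   # serve the file if it exists, else fall back to the SPA entry point
}</code>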

Add the host entry

127.0.0.1 static.zp.cn

to your hosts file, then open the site in a browser.

File Server Setup

For a simple file server with directory listing:

<code>autoindex on;              # enable directory listing
autoindex_exact_size on;   # show exact file sizes in bytes (off = human-readable)
autoindex_localtime on;    # show file times in server-local time

server {
    charset utf-8;
    listen 9050 default_server;
    listen [::]:9050 default_server;
    server_name _;
    root /share/fs;
}
</code>
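
`autoindex_exact_size on` prints sizes in exact bytes; switching it off gives human-readable sizes, which is often friendlier for a download area. A per-location variant (the paths are assumptions):

<code>location /downloads/ {
    alias /share/fs/;            # assumed shared directory
    autoindex on;
    autoindex_exact_size off;    # human-readable sizes (e.g. 1.2M) instead of exact bytes
    autoindex_localtime on;
}</code>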

CORS Solution

When front‑end and back‑end applications run on different ports, browsers block cross‑origin requests. Nginx can add the necessary CORS headers:

<code># enable-cors.conf
set $cors '';
if ($http_origin ~* (www\.helloworld\.com)$) { set $cors 'true'; }
if ($request_method = 'OPTIONS') { set $cors "${cors}options"; }
if ($request_method = 'GET')     { set $cors "${cors}get"; }
if ($request_method = 'POST')    { set $cors "${cors}post"; }

# Simple GET/POST requests from an allowed origin: echo the origin back
if ($cors = "trueget") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
}
if ($cors = "truepost") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
}

# Preflight OPTIONS requests: advertise allowed methods/headers and return early
if ($cors = "trueoptions") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    return 204;
}</code>

Include the CORS fragment in the server block handling API requests:

<code>upstream front_server { server www.helloworld.com:9000; }
upstream api_server   { server www.helloworld.com:8080; }

server {
    listen 80;
    server_name www.helloworld.com;

    location ~ ^/api/ {
        include enable-cors.conf;
        rewrite "^/api/(.*)$" /$1 break;   # strip the /api prefix before proxying
        proxy_pass http://api_server;
    }

    location / {
        proxy_pass http://front_server;
    }
}
</code>

With these configurations, Nginx can serve as a reverse proxy, load balancer, static file server, HTTPS endpoint, and CORS gateway.

Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.