Mastering Nginx Proxy and Load Balancing: Configurations and Best Practices

This article explains how to configure Nginx’s proxy features and various load‑balancing algorithms, covering error handling, request methods, timeout settings, header forwarding, upstream definitions, and practical examples to ensure reliable and efficient traffic distribution across multiple backend servers.


Introduction

Nginx’s proxy and load‑balancing capabilities are among its most frequently used features. After covering basic syntax in a previous article, this guide dives straight into proxy configuration and then details load‑balancing setups.

Nginx Proxy Configuration

1. To redirect 404 errors to an external page, add:

error_page 404 https://www.baidu.com; # error page

This alone does not work; you must also enable error interception:

proxy_intercept_errors on; # lets error_page handle upstream responses with status codes of 300 and above
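Putting the two directives together inside a server block gives a working sketch (the backend address is illustrative; error_page with an absolute URL makes Nginx answer with a 302 redirect):

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://192.168.10.121:3333;
        proxy_intercept_errors on;            # hand upstream errors to error_page
        error_page 404 https://www.baidu.com; # redirect intercepted 404s externally
    }
}
```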

2. Override the HTTP method used for requests forwarded to the upstream (for example, turn every proxied request into a GET):

proxy_method GET; # sets the method sent to the upstream; it does not restrict which methods clients may use

3. Set the HTTP protocol version used by the proxy:

proxy_http_version 1.0; # can be 1.0 or 1.1, default is 1.0
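If the backends support it, HTTP/1.1 is usually preferable because it allows keep-alive connections to the upstream. A minimal sketch (the upstream name mysvr and the address follow the examples later in this article):

```nginx
upstream mysvr {
    server 192.168.10.121:3333;
    keepalive 16;                         # keep up to 16 idle connections per worker
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # clear the default "close" header so connections stay open
        proxy_pass http://mysvr;
    }
}
```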

4. When one backend server becomes unavailable, Nginx may still route requests to it, causing long client wait times. The following timeout settings mitigate this issue:

proxy_connect_timeout 1;   # time to establish a connection to the upstream (seconds; default 60s)
proxy_read_timeout    1;   # time to wait for a response from the upstream (default 60s)
proxy_send_timeout    1;   # time to wait for the upstream to accept request data (default 60s)
proxy_ignore_client_abort on; # keep the upstream request running even if the client disconnects (default off)

5. Using the upstream directive, you can define a group of backend servers and specify how Nginx handles failures:

proxy_next_upstream timeout; # on timeout, try next server (options: error|timeout|invalid_header|http_500|...|off)
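Combined with the timeout settings above, a dead backend is skipped quickly instead of making the client wait. A sketch (addresses are illustrative):

```nginx
upstream mysvr {
    server 192.168.10.121:3333;
    server 192.168.10.122:3333;
}

server {
    location / {
        proxy_connect_timeout 1;           # give up on an unreachable backend after 1s
        proxy_next_upstream error timeout; # then retry the request on the next server in the group
        proxy_pass http://mysvr;
    }
}
```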

6. To obtain the real client IP instead of the proxy’s IP, set the following headers:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
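In context, these headers sit alongside proxy_pass; the backend application then reads X-Real-IP (or the first entry of X-Forwarded-For) instead of the TCP peer address, which would otherwise always be the proxy's IP:

```nginx
location / {
    proxy_set_header Host $host;                                   # preserve the requested host name
    proxy_set_header X-Real-IP $remote_addr;                       # the connecting client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # append to any existing chain
    proxy_pass http://192.168.10.121:3333;
}
```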

7. Example of a typical proxy configuration block (excerpt):

include       mime.types;
default_type  application/octet-stream;
log_format myFormat '$remote_addr-$remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';
access_log    log/access.log myFormat;
sendfile       on;
keepalive_timeout 65;
proxy_connect_timeout 1;
proxy_read_timeout    1;
proxy_send_timeout    1;
proxy_http_version 1.0;
#proxy_method GET;
proxy_ignore_client_abort on;
proxy_ignore_headers "Expires" "Set-Cookie";
proxy_intercept_errors on;
proxy_headers_hash_max_size 1024;
proxy_headers_hash_bucket_size 128;
proxy_next_upstream timeout;

Nginx Load Balancing Details

The upstream block defines a set of backend servers and the load‑balancing algorithm. Two syntax styles are shown:

upstream mysvr {
    server 192.168.10.121:3333;
    server 192.168.10.122:3333;
}

server {
    ...
    location / {
        proxy_pass http://mysvr;
    }
}

or, without an upstream group, by passing the backend address directly to proxy_pass. Note that the server directive inside an upstream block takes only an address and port, never a scheme, while proxy_pass always requires one:

server {
    ...
    location / {
        proxy_pass http://192.168.10.121:3333;
    }
}

Common load‑balancing algorithms:

Round‑robin (default): distributes requests evenly (ABABAB...).

Weighted round‑robin: servers receive requests proportionally to their weight.

IP hash: the same client IP is consistently routed to the same server.

Backup: a standby server receives traffic only when all primary servers fail.
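The IP-hash algorithm from the list above is enabled with a single directive inside the upstream block:

```nginx
upstream mysvr {
    ip_hash;                       # hash on the client IP; the same client always hits the same backend
    server 192.168.10.121:3333;
    server 192.168.10.122:3333;
}
```

Note that ip_hash cannot be combined with backup servers, since every client must map deterministically to one backend.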

Example of weighted round‑robin:

upstream mysvr {
    server 127.0.0.1:7878 weight=1;
    server 192.168.10.121:3333 weight=2;
}

Example of backup server configuration:

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333 backup; # hot standby
}

Key status parameters for upstream servers:

down : temporarily removes the server from the pool.

backup : used only when all non‑backup servers are unavailable.

max_fails : number of failed attempts before the server is considered down (default 1).

fail_timeout : the time window in which max_fails failures must occur, and also how long the server is then considered unavailable (default 10s).
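The down parameter from the list above is useful during maintenance; a marked server stays in the configuration but receives no traffic:

```nginx
upstream mysvr {
    server 127.0.0.1:7878 down;    # temporarily removed from rotation
    server 192.168.10.121:3333;
}
```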

Advanced example combining weights, failure limits, and timeouts:

upstream mysvr {
    server 127.0.0.1:7878 weight=2 max_fails=2 fail_timeout=2;
    server 192.168.10.121:3333 weight=1 max_fails=2 fail_timeout=1;
}

These configurations demonstrate that Nginx’s built‑in load‑balancing algorithms are powerful yet straightforward. For deeper exploration, consult the official Nginx documentation and available third‑party modules.

Written by

Linux Cloud Computing Practice

Welcome to Linux Cloud Computing Practice. We offer high-quality articles on Linux, cloud computing, DevOps, networking and related topics. Dive in and start your Linux cloud computing journey!
