
Master Nginx Load Balancing: From Concepts to Full Configuration

This article explains Nginx's role as an HTTP server, reverse proxy, and mail proxy, introduces load‑balancing concepts, details the upstream and proxy modules, compares scheduling algorithms, and provides step‑by‑step configuration examples to set up a functional load‑balancing environment.


Nginx Series Overview

Nginx is a high‑performance HTTP server / reverse‑proxy server and mail (IMAP/POP3) proxy. Official tests show it can handle 50,000 concurrent connections with very low CPU and memory usage, making it extremely stable.

Load Balancing

Load balancing distributes traffic across multiple servers to achieve high performance and high availability: several servers share the load, so no single overloaded server becomes a point of failure.

1. Nginx Load Balancing Introduction

Reverse Proxy vs Load Balancing

Strictly speaking, Nginx acts as a reverse proxy (Nginx Proxy). Because this reverse‑proxy capability produces a load‑balancing effect, it is commonly called Nginx load balancing. So what is the difference between a reverse proxy and load balancing?

Traditional load‑balancing software such as LVS mainly forwards request packets; in DR mode the backend sees the request as coming directly from the original client.

A reverse proxy instead terminates the client request, re‑issues it to a backend server, and relays the response, so the backend sees the proxy itself as the client.

In short, LVS forwards packets, while an Nginx reverse proxy re‑issues the request to the backend.
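As a minimal illustration of this re‑issuing behavior, a sketch (the name backend_host and the port are placeholders, not from the article):

```nginx
server {
    listen 80;
    location / {
        # Nginx terminates the client connection here and opens a
        # new connection to the backend to re-issue the request
        proxy_pass http://backend_host:8080;
    }
}
```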

2. Nginx Load Balancing Modules

The main components are:

ngx_http_upstream_module – the load‑balancing module; defines groups of backend servers and provides passive health checking via max_fails and fail_timeout.

ngx_http_proxy_module – the proxy module; forwards requests to upstream servers.

2.1 Upstream Module

(1) Introduction

The upstream module defines one or more named groups of backend servers. Requests are sent to a group by name with proxy_pass, e.g.:

proxy_pass http://server_pools;

Here server_pools is the name of an upstream group.

(2) Configuration Example

upstream server_pools {
    server 192.168.1.251:80 weight=5;
    server 192.168.1.252:80 weight=10;
    server 192.168.1.253:80 weight=15;
}

(3) Parameters

server – IP address or domain name of a backend server (an optional port may be appended).

weight – request weight (default 1); a server with a larger weight receives proportionally more requests.

max_fails – number of failed attempts within fail_timeout before the server is considered unavailable (default 1).

fail_timeout – the window in which max_fails failures mark the server unavailable, and also how long the server then stays out of rotation (default 10s).

backup – marks the server as a backup; it receives traffic only when all primary servers are down.

down – marks the server as permanently unavailable.

Example

upstream web_pools {
    server linux.example.com weight=5;
    server 127.0.0.1:8080 max_fails=5 fail_timeout=10s;
    server linux.example.com:8080 backup;
}

2.2 ngx_http_proxy_module

(1) proxy_pass Directive

The proxy_pass directive belongs to ngx_http_proxy_module. It forwards requests matched by a location block to an upstream pool or to a specific address.

(2) Usage Example

location /web/ {
    proxy_pass http://127.0.0.1/abc/;
}

Requests matching the URI prefix /web/ are proxied to http://127.0.0.1/abc/ ; the matched prefix is replaced, so a request for /web/a.html reaches the backend as /abc/a.html.

(3) Common Parameters

proxy_set_header – sets request headers sent to the backend (e.g., to pass the real client IP in X-Real-IP or X-Forwarded-For).

client_body_buffer_size – buffer size for reading the client request body.

proxy_connect_timeout – timeout for establishing a connection to the backend.

proxy_send_timeout – timeout between two successive writes when sending the request to the backend.

proxy_read_timeout – timeout between two successive reads when receiving the response from the backend.

proxy_buffer_size – buffer size for the first part of the backend response (typically the headers).

proxy_buffers – number and size of buffers for reading the response body.

proxy_busy_buffers_size – limit on the buffers that may be busy sending data to the client while the response is not yet fully read.

proxy_temp_file_write_size – limit on the amount of data written to a temporary file at a time.
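Combined in a proxy location, these directives might look like the following sketch (the upstream name web_pools, header choices, and timeout values are illustrative, not from the article):

```nginx
location / {
    proxy_pass http://web_pools;
    # Preserve the original Host header and client IP for the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Fail over quickly if a backend is unreachable
    proxy_connect_timeout 5s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
    # Response buffering: first buffer for headers, then 8 x 32k for the body
    proxy_buffer_size 4k;
    proxy_buffers 8 32k;
    proxy_busy_buffers_size 64k;
}
```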

3. Nginx Load Balancing Scheduling Algorithms

(1) Round Robin (rr) – default

Distributes requests sequentially across servers; failed servers are automatically removed.

upstream server_pools {
    server 192.168.1.251;
    server 192.168.1.252;
}
Note: With heterogeneous server performance, simple round robin may lead to uneven resource allocation.

(2) Weighted Round Robin (wrr)

Adds weight to each server; higher weight receives more requests.

upstream server_pools {
    server 192.168.1.251 weight=5;
    server 192.168.1.252 weight=10;
}
Useful when server capacities differ; with weights 5 and 10, the second server receives about twice as many requests as the first (10/15 vs. 5/15 of the traffic).

(3) ip_hash – session persistence

Hashes the client IP to consistently route the same client to the same backend, solving session‑sharing issues.

upstream server_pools {
    ip_hash;
    server 192.168.1.251;
    server 192.168.1.252;
}
If a backend crashes, the clients hashed to it are remapped to another server, and their sessions are lost unless session state is shared externally.

(4) fair – dynamic scheduling

Distributes requests based on backend response time; faster servers receive more traffic. Nginx does not support this natively; the third‑party upstream_fair module must be installed.

upstream server_pools {
    server 192.168.1.251;
    server 192.168.1.252;
    fair;
}

(5) url_hash – web cache nodes

Hashes the request URI so the same URL is always routed to the same backend, which suits web‑cache nodes. The configuration below requires the third‑party upstream hash module.

upstream server_pools {
    server qll:9001;
    server qll:9002;
    hash $request_uri;
    hash_method crc32;
}
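For reference, since version 1.7.2 stock Nginx provides a built‑in hash directive in ngx_http_upstream_module, so the third‑party module is no longer required. An equivalent configuration (reusing the placeholder hostnames from the example above) might look like:

```nginx
upstream server_pools {
    # Built-in generic hash (nginx >= 1.7.2); 'consistent' enables
    # ketama consistent hashing, so adding or removing a server
    # remaps only a small fraction of keys
    hash $request_uri consistent;
    server qll:9001;
    server qll:9002;
}
```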

4. Nginx Load Balancing Configuration Example

(1) Expected Result

Visiting http://www.qll.com should distribute traffic evenly between the two web servers.

(2) Preparation

Three Nginx servers:

Hostname   IP Address      Role
web01      10.43.187.251   Nginx web server
web02      10.43.187.252   Nginx web server
lb01       10.43.187.253   Nginx load balancer

All three servers must have Nginx installed.

(3) Configure test web services

[root@web01 nginx]# cat conf/nginx.conf
worker_processes  1;
events { worker_connections 1024; }
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        location / {
            root html/www;
            index index.html index.htm;
        }
        # default "combined" format (no custom log_format is defined)
        access_log logs/access_www.log;
    }
}

Create a test file on each web server:

[root@web01 ~]# cd /usr/local/nginx/html/
[root@web01 html]# mkdir www
[root@web01 html]# echo "`hostname -I` www" > www/index.html

(4) Configure the load balancer

[root@lb01 nginx]# cat conf/nginx.conf
worker_processes  1;
events { worker_connections 1024; }
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream www_server_pools {
        server 10.43.187.251:80 weight=1;
        server 10.43.187.252:80 weight=1;
    }
    server {
        listen 80;
        server_name www.qll.com;
        location / {
            proxy_pass http://www_server_pools;
        }
    }
}

(5) DNS / hosts configuration

Because this is a test environment, add the following line to the client's hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux):

10.43.187.253 www.qll.com

(6) Test Verification

Open a browser and visit www.qll.com. Refresh repeatedly; requests should be evenly distributed between web01 (10.43.187.251) and web02 (10.43.187.252), confirming load balancing.
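The same check can be scripted with curl, a sketch assuming the hosts entry above is in place on the client (the request count of 10 is arbitrary):

```shell
# Send 10 requests through the load balancer; each backend's index.html
# contains its own IP, so `sort | uniq -c` tallies requests per backend.
for i in $(seq 1 10); do
    curl -s http://www.qll.com
done | sort | uniq -c
```

With equal weights the two counts should be roughly equal, about 5 each for 10 requests.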

Nginx load balancing test result
Written by

Open Source Linux

Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.