
Mastering Load Balancing: LVS, Nginx, and HAProxy Explained

This article provides a comprehensive overview of server clustering and load balancing technologies, detailing the roles of LVS, Nginx, and HAProxy, their architectures, operating modes, advantages, disadvantages, and practical deployment scenarios for modern web services.


Most modern internet systems use server clusters, deploying identical services across multiple machines to provide a unified service. Clusters can be web application servers, database servers, or distributed cache servers.

Typically a load‑balancing server sits in front of a web‑server cluster, acting as the entry point and forwarding client requests to the most suitable web server.

Cloud computing and distributed architectures essentially package backend servers as compute and storage resources, presenting them as a seemingly limitless service to clients, while the actual work is performed by the underlying cluster.

LVS, Nginx, and HAProxy are the three most widely used software load balancers.

Choosing a load‑balancing solution depends on site scale: for small to medium sites (PV under 10 million), Nginx alone suffices; larger sites can combine DNS round‑robin with a load‑balancing tier; very large or mission‑critical services typically adopt LVS.

Common architecture: Web front‑end uses Nginx/HAProxy + Keepalived as the load balancer; backend uses MySQL master‑slave with read/write separation, often combined with LVS + Keepalived.

LVS

LVS (Linux Virtual Server), launched in 1998, has long since matured; it has been part of the standard Linux kernel since version 2.4 and requires no additional patches.

1. LVS Architecture

LVS clusters consist of three layers: (1) Load Balancer layer, (2) Server Array layer, (3) Shared Storage layer.

2. LVS Load‑Balancing Mechanism

LVS operates at Layer 4 (transport layer) and balances TCP/UDP traffic, offering higher efficiency than Layer 7 solutions.

Layer 4 balancing uses destination IP/port, while Layer 7 (content switching) inspects application‑level data.

LVS forwards packets by modifying IP addresses (NAT mode: SNAT and DNAT) or MAC addresses (DR mode).

3. NAT Mode

NAT (Network Address Translation) maps external to internal addresses. In DNAT, LVS changes the destination IP to the real server’s IP; the real server replies with its own IP, which LVS then SNATs back to the virtual IP (VIP), making the client think LVS responded directly.
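The two rewrites can be sketched as follows. This is a minimal illustration of the NAT‑mode idea, not real kernel code; the IP addresses and packet dictionaries are invented for the example.

```python
# Sketch of LVS NAT-mode address rewriting. All addresses are illustrative.
VIP = "10.0.0.1"                          # virtual IP the client connects to
REAL_SERVERS = ["192.168.0.11", "192.168.0.12"]

def dnat_inbound(packet, real_ip):
    """Inbound: rewrite the destination from the VIP to a chosen real server (DNAT)."""
    return {**packet, "dst": real_ip}

def snat_outbound(packet):
    """Outbound: rewrite the source from the real server back to the VIP (SNAT)."""
    return {**packet, "src": VIP}

# Client -> LVS -> real server
inbound = {"src": "203.0.113.5", "dst": VIP, "payload": "GET /"}
to_server = dnat_inbound(inbound, REAL_SERVERS[0])

# Real server -> LVS -> client: the client only ever sees the VIP
reply = {"src": REAL_SERVERS[0], "dst": "203.0.113.5", "payload": "200 OK"}
to_client = snat_outbound(reply)
```

Because both directions traverse the balancer, NAT mode makes the balancer a potential bandwidth bottleneck, which motivates DR mode below.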

4. DR Mode

In Direct Routing (DR), LVS and real servers share the same VIP. LVS only rewrites the MAC address to forward packets; IP addresses remain unchanged, allowing the real server to reply directly to the client, eliminating the load balancer as a bottleneck.
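The contrast with NAT mode is that only the Layer 2 destination changes. A toy sketch of that forwarding decision, with invented MAC addresses and server names:

```python
# Sketch of DR-mode forwarding: only the destination MAC is rewritten;
# the IP header, including the shared VIP, is left untouched.
VIP = "10.0.0.1"
LVS_MAC = "aa:aa:aa:aa:aa:01"
REAL_MACS = {"rs1": "aa:aa:aa:aa:aa:11", "rs2": "aa:aa:aa:aa:aa:12"}

def dr_forward(frame, real_server):
    """Rewrite only the L2 destination; the real server answers the client directly."""
    return {**frame, "dst_mac": REAL_MACS[real_server]}

frame = {"dst_mac": LVS_MAC, "src_ip": "203.0.113.5", "dst_ip": VIP}
forwarded = dr_forward(frame, "rs1")
```

Since the reply never revisits the balancer, response bandwidth scales with the real servers rather than with the LVS node.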

DR offers high performance and is widely used by large websites.

5. Advantages of LVS

Strong load capacity, operates at transport layer with minimal CPU/memory usage.

Simple configuration reduces human error.

Stable operation with built‑in high‑availability (e.g., LVS + Keepalived).

In DR mode, response traffic bypasses the balancer entirely, preserving its I/O performance.

Broad applicability to HTTP, databases, chat services, etc.

6. Disadvantages of LVS

Cannot process regular expressions; lacks content‑based routing (a strength of Nginx/HAProxy).

Complex to deploy for very large sites compared to Nginx/HAProxy.

Nginx

Nginx is a high‑performance web server and reverse proxy that excels at handling massive concurrent HTTP requests with low memory consumption.

1. Nginx Architecture

Unlike process‑oriented servers such as Apache, Nginx uses an event‑driven, asynchronous, non‑blocking model: a master process manages several single‑threaded worker processes, which communicate through shared memory.

Each worker handles many connections concurrently without blocking, similar to Netty.
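The worker loop can be approximated with Python's `selectors` module, which wraps epoll on Linux. This is only a toy illustration of one thread multiplexing many sockets, not how Nginx is implemented:

```python
import selectors
import socket

# One thread waits on many sockets at once and only touches the ready ones,
# the same multiplexing idea as an Nginx worker's epoll loop.
sel = selectors.DefaultSelector()        # epoll on Linux, kqueue on BSD, etc.
a, b = socket.socketpair()
for s in (a, b):
    s.setblocking(False)                 # never block on a single connection
    sel.register(s, selectors.EVENT_READ)

a.send(b"ping")                          # data becomes readable on b
events = sel.select(timeout=1)           # wake up only for ready sockets
data = events[0][0].fileobj.recv(4)      # service the ready socket, no blocking
```

With thousands of registered connections, the loop still wakes only for the few that are ready, which is why memory and CPU stay low under high concurrency.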

2. Nginx Load Balancing

Nginx performs Layer 7 (application‑layer) load balancing for HTTP/HTTPS via reverse proxy.

Supported upstream strategies include:

Round‑robin (default)

Weight‑based distribution

IP hash (session persistence)

Fair (third‑party, based on response time)

URL hash (third‑party, directs same URL to same backend)
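The selection logic behind the first three strategies can be sketched in a few lines. The backend names and weights here are invented, and real Nginx uses a smoother weighted algorithm; this only shows the mapping each strategy produces:

```python
import hashlib
import itertools

BACKENDS = ["app1:8080", "app2:8080", "app3:8080"]  # illustrative upstream pool

rr = itertools.cycle(BACKENDS)
def round_robin():
    """Default strategy: hand out backends in turn."""
    return next(rr)

def weighted_pool(weights):
    """Weight-based: repeat each backend by its weight, then round-robin the pool."""
    return itertools.cycle([b for b, w in weights.items() for _ in range(w)])

def ip_hash(client_ip):
    """ip_hash: the same client IP always maps to the same backend,
    which is what provides session persistence."""
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]
```

URL hash works like `ip_hash` but keys on the request URL instead of the client address, so repeated requests for one resource hit the same backend cache.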

3. Advantages of Nginx

Cross‑platform support.

Simple configuration.

Non‑blocking, high concurrency (tested up to 50 k concurrent connections).

Event‑driven epoll model.

Master/worker process model.

Low memory usage (e.g., 10 workers consume ~150 MB for 30 k connections).

Built‑in health checks.

Bandwidth saving via GZIP and caching headers.

High stability as a reverse proxy.

4. Disadvantages of Nginx

Supports only HTTP, HTTPS, and mail protocols (later versions add generic TCP/UDP proxying via the stream module).

Health checks limited to port probing; no URL‑level checks.

Session persistence requires workarounds like ip_hash.

HAProxy

HAProxy supports both TCP (Layer 4) and HTTP (Layer 7) proxy modes and virtual hosting.

It complements Nginx by offering session persistence, cookie‑based routing, and URL‑based health checks.

HAProxy generally provides higher load‑balancing performance than Nginx and can balance MySQL read traffic, often used together with LVS + Keepalived for database clustering.

Supported HAProxy load‑balancing algorithms include roundrobin, weighted round‑robin (static-rr), source (client IP hash), uri (request URI hash), and rdp-cookie.
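The deterministic algorithms (source and uri) share one idea: hash a stable key and map it onto the backend list. A rough sketch, with made‑up backend names; HAProxy's real hashing differs in detail:

```python
import hashlib

BACKENDS = ["web1", "web2", "web3"]  # illustrative backend pool

def pick(key):
    """Map a stable key deterministically onto a backend."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return BACKENDS[h % len(BACKENDS)]

def source(client_ip):
    """balance source: the same client IP sticks to the same backend."""
    return pick(client_ip)

def uri(request_uri):
    """balance uri: the same URI goes to the same backend (cache-friendly)."""
    return pick(request_uri)
```

This determinism is what gives session persistence without cookies (source) and better backend cache hit rates (uri).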


Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together.
