
Understanding Load Balancing: Types, LVS Modes, and Scheduling Algorithms

This article explains what load balancing is and why it is needed, outlines its main types (DNS, hardware, and software), introduces Linux Virtual Server (LVS) and its four operating modes, and reviews the common static and dynamic scheduling algorithms used to distribute traffic efficiently.

Cyber Elephant Tech Team

1. What is Load Balancing

Load balancing is a computer technology used to distribute load across multiple computers (clusters), network connections, CPUs, disk drives, or other resources to optimize resource usage, maximize throughput, minimize response time, and avoid overload. Using multiple server components with load balancing instead of a single component improves redundancy and reliability. Load balancing services are usually implemented by dedicated software or hardware and aim to distribute large numbers of jobs across multiple execution units, solving high concurrency and high availability problems in internet architectures.

2. Why Load Balancing is Needed

When a single server can no longer handle normal traffic in production, load balancing distributes user requests across all functional nodes of a backend cluster, improving fault tolerance and user experience.

3. Types of Load Balancing

3.1 DNS Load Balancing

The idea is that DNS resolution for the same domain can return different IP addresses, enabling geographic distribution (e.g., northern users to Beijing data center, southern users to Shenzhen). Advantages: simple, low cost, handled by DNS servers, and provides proximity access. Disadvantages: long DNS cache times cause delayed updates, and DNS control resides with the domain registrar, limiting customization.
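The rotation behind DNS load balancing can be sketched in a few lines. This is a toy simulation, not a real resolver; the domain and the two data-center IPs are made up for illustration:

```python
from itertools import cycle

# Hypothetical A records for one domain, e.g. a Beijing VIP and a Shenzhen VIP.
# Real DNS servers achieve the same effect by rotating record order per response.
a_records = ["203.0.113.10", "198.51.100.20"]

rotation = cycle(a_records)

def resolve(domain: str) -> str:
    """Simulate DNS round-robin: each lookup returns the next IP in rotation."""
    return next(rotation)

ips = [resolve("example.com") for _ in range(4)]
print(ips)  # alternates between the two data-center IPs
```

Note that client-side and resolver caching mean a given client keeps hitting the same IP until its cached record expires, which is exactly the "delayed updates" drawback mentioned above.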

3.2 Hardware Load Balancing

Implemented via dedicated hardware devices such as F5, A10. Advantages: powerful features, support for multiple load‑balancing levels and algorithms, high performance (over 1 million concurrent connections), high stability, and integrated security functions (firewall, DDoS protection). Disadvantages: expensive (tens of thousands), limited scalability.

3.3 Software Load Balancing

Implemented with software such as Nginx (layer‑7) and LVS (layer‑4). Compared with hardware, performance is lower (Nginx ~50 k requests/s vs. hardware millions) but cost is much cheaper. Advantages: simple, flexible, inexpensive. Disadvantages: performance generally lower than hardware solutions.

4. LVS Overview

LVS (Linux Virtual Server) is a virtual server cluster project started by Zhang Wensong in 1998 and later integrated into the Linux kernel as the IPVS module. Official site: http://www.linuxvirtualserver.org/zh/

5. LVS Load‑Balancing Architecture Diagram

LVS architecture diagram

6. LVS Terminology

CIP – client IP address

VIP – public IP of LVS (client‑facing)

DIP – internal IP of LVS (communicates with real servers)

RIP – real server’s actual IP address

7. LVS Operating Modes and Their Pros/Cons

7.1 NAT Mode (LVS‑NAT)

LVS NAT mode diagram

In NAT mode the load balancer rewrites the destination IP of client packets to the IP of a selected real server (RS). The RS processes the request and sends the response back to the load balancer, which then rewrites the source IP back to its own IP before forwarding to the client. All traffic passes through the load balancer.

Advantages: any TCP/IP OS can be used on RS; only the load balancer needs a legal IP.

Disadvantages: limited scalability; the load balancer becomes a bottleneck as all request and response packets traverse it.
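The two rewrites in NAT mode can be made concrete with a toy simulation. The `Packet` type and all addresses below are illustrative, not from the article:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src: str      # source IP
    dst: str      # destination IP
    payload: str

# Illustrative addresses: client, LVS public IP, and one real server.
CIP, VIP, RIP = "198.51.100.7", "203.0.113.1", "10.0.0.11"

def nat_inbound(pkt: Packet, rip: str) -> Packet:
    # On the way in, LVS-NAT rewrites only the destination IP (VIP -> RIP).
    return replace(pkt, dst=rip)

def nat_outbound(pkt: Packet, vip: str) -> Packet:
    # The RS reply returns through the director, which rewrites the
    # source IP back to the VIP before forwarding to the client.
    return replace(pkt, src=vip)

request = Packet(src=CIP, dst=VIP, payload="GET /")
to_rs = nat_inbound(request, RIP)                      # client -> LVS -> RS
reply = Packet(src=RIP, dst=CIP, payload="200 OK")
to_client = nat_outbound(reply, VIP)                   # RS -> LVS -> client
```

Because both directions pass through the director, response bandwidth (usually much larger than request bandwidth) is what makes it the bottleneck.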

7.2 LVS FULL NAT Mode

LVS Full NAT diagram

In FULL NAT mode the load balancer rewrites both the destination IP (VIP to RIP) and the source IP (CIP to the LVS's local IP) of the client packet, so the real server always replies to the LVS; on the return path the LVS reverses both rewrites before forwarding the response to the client. Because of this, the load balancer and real servers do not need to be in the same subnet.

Compared with NAT, FULL NAT guarantees that replies from RS reach the LVS, but performance is about 10 % lower due to additional source‑IP rewriting.

7.3 IP Tunnel Mode (LVS‑TUN)

LVS Tunnel mode diagram

Because client requests are usually small while responses are large, TUN mode encapsulates the client packet inside a new outer IP header (addressed to the chosen RS) and forwards it. The RS decapsulates the packet, processes the request, and sends the response directly to the client, bypassing the load balancer. The RS must support IP-in-IP tunneling, which requires the corresponding kernel option (the ipip module on Linux) to be enabled.

Advantages: the load balancer only forwards request packets, reducing its data load and allowing it to handle massive traffic.

Disadvantages: RS nodes must have a legal IP and support IP tunneling, which may limit applicability to certain Linux distributions.
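The key property of TUN mode is that the original packet is wrapped rather than rewritten. A toy encapsulation sketch (types and addresses are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IPHeader:
    src: str
    dst: str

@dataclass(frozen=True)
class Packet:
    header: IPHeader
    payload: object   # either application data or, when tunneled, an inner Packet

# Illustrative addresses: client, LVS public IP, LVS internal IP, real server.
CIP, VIP, DIP, RIP = "198.51.100.7", "203.0.113.1", "10.0.0.1", "10.0.0.11"

def tun_encapsulate(pkt: Packet, dip: str, rip: str) -> Packet:
    # The director wraps the original packet in a new outer IP header (IPIP);
    # the inner packet, still addressed CIP -> VIP, is left untouched.
    return Packet(header=IPHeader(src=dip, dst=rip), payload=pkt)

def tun_decapsulate(pkt: Packet) -> Packet:
    # The RS strips the outer header and sees the original CIP -> VIP packet,
    # so it can send its (large) response straight back to the client.
    return pkt.payload

request = Packet(IPHeader(src=CIP, dst=VIP), "GET /")
tunneled = tun_encapsulate(request, DIP, RIP)
inner = tun_decapsulate(tunneled)
```

Because the inner source address is still the client's, the RS reply never needs to traverse the director.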

7.4 Direct Routing Mode (LVS‑DR)

LVS Direct Routing diagram

In DR mode both the load balancer and RS share the same virtual IP. Only the load balancer responds to ARP requests for that IP; RS remains silent. The gateway directs traffic for the virtual IP to the load balancer, which selects an RS and rewrites the destination MAC address. The RS processes the packet and, because the IP is unchanged, can reply directly to the client. This requires the load balancer and RS to be on the same broadcast domain.

Advantages: similar to TUN mode, the load balancer only distributes requests, while responses bypass it; works with most operating systems.

Disadvantages: the load balancer’s network interface must share the same physical segment as the servers.
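DR mode touches even less of the packet than TUN: only the layer-2 destination MAC changes. A toy sketch (frame fields, MAC addresses, and server names are illustrative):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Frame:
    dst_mac: str   # layer-2 destination
    src_ip: str    # layer-3 header, untouched by LVS-DR
    dst_ip: str

VIP = "203.0.113.1"
RS_MACS = {"rs1": "02:00:00:00:00:11", "rs2": "02:00:00:00:00:22"}

def dr_forward(frame: Frame, rs: str) -> Frame:
    # LVS-DR rewrites only the destination MAC to the chosen real server.
    # The IP header (CIP -> VIP) is unchanged, so the RS -- which also holds
    # the VIP on a non-ARPing interface -- can answer the client directly.
    return replace(frame, dst_mac=RS_MACS[rs])

incoming = Frame(dst_mac="02:00:00:00:00:01",  # the director's MAC
                 src_ip="198.51.100.7", dst_ip=VIP)
forwarded = dr_forward(incoming, "rs1")
```

The MAC-only rewrite is also why DR mode requires the director and real servers to share a broadcast domain: a MAC address is only meaningful within one layer-2 segment.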

8. Load‑Balancing Scheduling Algorithms

LVS provides static and dynamic scheduling algorithms.

Static algorithms:

RR – Round Robin: distributes requests sequentially across servers.

WRR – Weighted Round Robin: assigns more requests to servers with higher capacity.

DH – Destination Hash: hashes the destination IP to select a server.

SH – Source Hash: hashes the source IP to select a server.
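The static algorithms above can be sketched compactly. The server names and weights are made up, and the WRR shown here uses a simplified repetition scheme rather than LVS's interleaved weighted round-robin:

```python
import hashlib
from itertools import cycle

servers = ["rs1", "rs2", "rs3"]              # hypothetical real servers
weights = {"rs1": 3, "rs2": 2, "rs3": 1}     # illustrative weights

# RR: walk the server list in order, wrapping around.
rr = cycle(servers)

# WRR (simplified): repeat each server in proportion to its weight.
wrr = cycle([s for s in servers for _ in range(weights[s])])

def sh(source_ip: str) -> str:
    """SH: hash the source IP so one client always maps to the same server.
    DH is identical except that it hashes the destination IP instead."""
    digest = hashlib.md5(source_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

print([next(rr) for _ in range(4)])   # rs1, rs2, rs3, rs1
```

The hash-based schedulers trade even distribution for stickiness, which is useful when per-client state (sessions, caches) lives on the real servers.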

Dynamic algorithms:

LC – Least Connections: directs traffic to the server with the fewest active connections.

WLC – Weighted Least Connections: similar to LC but considers server weight.

SED – Shortest Expected Delay: selects server based on calculated delay, factoring active connections and weight.

NQ – Never Queue (Never Queue Scheduling): assigns a request to any server with zero connections, avoiding queue calculations.
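The dynamic algorithms differ only in the score each server is ranked by. A minimal sketch, with made-up per-server state (weights and active-connection counts are illustrative):

```python
# Hypothetical per-server state: configured weight and current active connections.
state = {
    "rs1": {"weight": 4, "active": 3},
    "rs2": {"weight": 2, "active": 2},
    "rs3": {"weight": 1, "active": 0},
}

def lc(servers):
    # LC: the server with the fewest active connections wins.
    return min(servers, key=lambda s: servers[s]["active"])

def wlc(servers):
    # WLC: minimize active / weight, so heavier servers absorb more load.
    return min(servers, key=lambda s: servers[s]["active"] / servers[s]["weight"])

def sed(servers):
    # SED: minimize (active + 1) / weight -- the expected delay if this
    # server were handed one more request.
    return min(servers, key=lambda s: (servers[s]["active"] + 1) / servers[s]["weight"])

def nq(servers):
    # NQ: hand the request to any idle server immediately; only when every
    # server is busy does it fall back to the SED calculation.
    idle = [s for s in servers if servers[s]["active"] == 0]
    return idle[0] if idle else sed(servers)
```

With this state, LC, WLC, and NQ all pick the idle rs3, but SED picks rs1: counting the incoming request itself, rs1's expected delay (4/4) is no worse than rs3's (1/1), and rs1 comes first in the tie.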

Article source: the Operations team (运维团队) at Yidian Zixun (一点资讯).

Tags: network architecture, Operations, load balancing, scheduling algorithms, LVS
Written by Cyber Elephant Tech Team

Official tech account of Cyber Elephant, a platform for the group's technology innovation, sharing, and communication.