
Understanding Linux Virtual Server (LVS) Load Balancing: Principles, Implementation Methods, and Scheduling Algorithms

This article explains the role of load balancers in large‑scale internet applications, introduces Linux Virtual Server (LVS) as a layer‑4 software load‑balancing solution, describes its architecture and its NAT, TUN, and DR forwarding methods, and surveys its static and dynamic scheduling algorithms, from Round Robin and Weighted Least‑Connection to the locality‑based strategies.


Introduction

In large‑scale internet applications, load‑balancing devices are essential for handling high concurrency and traffic. They act as the entry point for client requests, select the most suitable server, and transparently forward traffic to that server.

What Is LVS?

LVS (Linux Virtual Server) is a layer‑4 (transport‑layer) software load balancer that supports TCP and UDP. It originated as a free software project and has been part of the mainline Linux kernel since version 2.4, so no additional kernel patches are required.

Architecture

LVS clusters typically use a three‑tier structure:

Load Balancer (LB): the front‑end device that receives client requests on a virtual IP (VIP) and forwards them to the server pool.

Server Pool (Real Servers, RS): the actual machines that process the requests.

Shared Storage: provides a common data store for the server pool.

Forwarding Methods

LVS forwards traffic by modifying IP addresses or MAC addresses. The three main techniques are:

NAT (Network Address Translation): rewrites the destination IP (DNAT) of incoming packets and the source IP (SNAT) of replies, so traffic in both directions passes through the LB.

TUN (IP Tunneling): encapsulates the client packet in an IP tunnel and sends it to the selected server; the server replies directly to the client.

DR (Direct Routing): only the destination MAC address is rewritten; the packet reaches the chosen server on the local network, and the server responds directly to the client without passing through the LB.

Scheduling Algorithms

Static Scheduling

RR (Round Robin): cycles through servers sequentially.

WRR (Weighted Round Robin): like RR, but distributes requests in proportion to per‑server weights that reflect capacity.

SH (Source Hashing): hashes the client’s source IP to a specific server, providing session persistence.

DH (Destination Hashing): hashes the destination IP to a server, ensuring the same destination IP is consistently routed.
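The static algorithms above can be sketched in a few lines. This is an illustrative simulation, not LVS's kernel implementation: the server names, weights, and the CRC32 hash function are assumptions made for the example.

```python
from itertools import cycle
import zlib

servers = ["rs1", "rs2", "rs3"]

# RR: cycle through the servers sequentially.
rr = cycle(servers)
picks = [next(rr) for _ in range(4)]  # rs1, rs2, rs3, rs1

# WRR: repeat each server in the ring proportionally to its weight.
weights = {"rs1": 3, "rs2": 1}
wrr_ring = [s for s, w in weights.items() for _ in range(w)]

# SH: hash the client's source IP so the same client always lands on
# the same server (session persistence); CRC32 stands in for the hash.
def source_hash(client_ip: str) -> str:
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]
```

DH works the same way as SH, except the hash key is the destination IP rather than the source IP.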

Dynamic Scheduling

LC (Least‑Connection): selects the server with the fewest active connections.

WLC (Weighted Least‑Connection): extends LC by dividing each server's connection count by its weight.

SED (Shortest Expected Delay): estimates delay based on active connections and weight.

NQ (Never Queue): a simplified version of SED that avoids queuing when a server has zero connections.

LBLC (Locality‑Based Least Connection): aims to improve cache locality by keeping requests from the same IP on the same server.

LBLCR (Locality‑Based Least Connection with Replication): adds replication to LBLC to balance load when a server becomes overloaded.
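To make the difference between WLC and SED concrete, here is a minimal sketch of the two selection rules; the connection counts and weights below are invented for illustration.

```python
pool = {
    # name: (active_connections, weight)
    "rs1": (0, 1),
    "rs2": (2, 4),
}

def wlc(servers):
    # WLC: pick the server with the fewest connections per unit of weight.
    return min(servers, key=lambda s: servers[s][0] / servers[s][1])

def sed(servers):
    # SED: (connections + 1) / weight estimates the delay a new request
    # would see, so a higher-weight server can win even against an idle one.
    return min(servers, key=lambda s: (servers[s][0] + 1) / servers[s][1])
```

With these numbers WLC picks the idle rs1 (0/1 beats 2/4), while SED prefers rs2 (3/4 beats 1/1) because its higher weight implies a shorter expected delay. NQ would short‑circuit to rs1 without computing SED at all, since rs1 has zero connections.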

Using LVS on Linux

The core component is the IPVS kernel module. After installing IPVS on a director server, a virtual IP (VIP) is created. Clients connect to the VIP, the director selects a real server according to the chosen scheduling algorithm, and the request is forwarded accordingly.
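As a concrete illustration, a DR‑mode virtual service might be set up with the ipvsadm tool roughly as follows. The VIP and real‑server addresses are placeholders, and the commands require root privileges and the ip_vs kernel module:

```shell
# Add a virtual service on the VIP, scheduled with weighted round robin.
ipvsadm -A -t 192.168.0.100:80 -s wrr

# Attach two real servers in direct-routing mode (-g); -m would select
# NAT and -i IP tunneling instead. Weights reflect relative capacity.
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11:80 -g -w 2
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12:80 -g -w 1

# List the resulting virtual server table.
ipvsadm -L -n
```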

Written by Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
