Introduction to LVS Load Balancing and Its Scheduling Strategies
This article introduces LVS, a layer‑4 load‑balancing tool, explains its advantages over layer‑7 solutions like Nginx, describes how combining LVS with Nginx and Keepalived creates a highly available, horizontally scalable architecture, and details the three packet‑forwarding modes: VS/NAT, VS/TUN, and VS/DR.
1. Introduction
LVS (Linux Virtual Server) is a layer‑4 load‑balancing tool: it makes forwarding decisions at the transport layer inside the Linux kernel, supporting the TCP and UDP protocols. Because it works at layer 4 and never parses application payloads, its request‑handling capacity far exceeds that of a typical layer‑7 proxy such as Nginx, whose load‑balancing capacity is commonly cited as roughly one‑tenth of LVS's. While the application servers behind Nginx can be scaled horizontally, a single Nginx instance cannot be scaled out on its own without something in front of it to distribute traffic.
By integrating LVS with Nginx, multiple Nginx instances can be deployed behind LVS, which distributes incoming requests across them. Nginx then forwards traffic to the backend application servers, achieving horizontal scaling of Nginx. Since Nginx can also fail, adding Keepalived provides health checking and failover, resulting in a highly available Nginx cluster built on Keepalived + LVS + Nginx.
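A minimal Keepalived configuration makes this architecture concrete. The sketch below uses hypothetical addresses (VIP 192.168.1.100, two Nginx real servers at 192.168.1.11 and 192.168.1.12) and shows only the MASTER director; a BACKUP node would use `state BACKUP` and a lower `priority`:

```
# Minimal keepalived.conf sketch -- hypothetical addresses, not a
# production-ready configuration.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100      # the VIP that clients connect to
    }
}

virtual_server 192.168.1.100 80 {
    delay_loop 6           # health-check interval in seconds
    lb_algo rr             # scheduling algorithm: round robin
    lb_kind DR             # forwarding mode: direct routing
    protocol TCP

    real_server 192.168.1.11 80 {
        TCP_CHECK {
            connect_timeout 3   # drop the server if it stops answering
        }
    }
    real_server 192.168.1.12 80 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```

Keepalived both drives the kernel's IPVS tables (the `virtual_server` block) and performs the health checking and VIP failover described above, so a failed Nginx instance is removed from rotation and a failed director hands the VIP to its backup.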
2. Schedulers and Load‑Balancing Strategies
LVS separates two concerns: the forwarding mode, which determines how request and response packets are relayed between the client and the real servers, and the scheduling algorithm, which picks a real server for each connection (for example round robin, weighted round robin, or least connections). There are three main forwarding modes.
Virtual Server via Network Address Translation (VS/NAT) – Clients send requests to a virtual IP. LVS selects a real server using the scheduling algorithm, rewrites the destination IP of the packet to that server's address, and forwards it. On the way back, LVS rewrites the source IP of the response to the virtual IP before sending it to the client. This mode appears to the client as a single server, but all response traffic must pass back through the scheduler, which can become a bottleneck under heavy load.
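A VS/NAT setup can be sketched with `ipvsadm` on the director. The addresses are hypothetical (VIP 192.168.1.100, real servers on a private 10.0.0.0/24 network behind the director); the commands require root and the `ip_vs` kernel module:

```shell
# Enable packet forwarding so the director can relay traffic.
sysctl -w net.ipv4.ip_forward=1

# Create the virtual service with round-robin scheduling (-s rr).
ipvsadm -A -t 192.168.1.100:80 -s rr

# Add real servers in masquerading (NAT) mode (-m).
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -m
```

For the source-IP rewrite of responses to work, the real servers must use the director as their default gateway, which is exactly why all response traffic funnels through it in this mode.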
Virtual Server via IP Tunneling (VS/TUN) – Instead of rewriting addresses, the scheduler encapsulates each request packet in an IP tunnel and forwards it to the chosen real server. The real server, which has the virtual IP configured locally, decapsulates the packet, processes the request, and sends the response directly to the client with the virtual IP as the source address. Because only request packets traverse the scheduler and responses bypass it entirely, throughput improves greatly over VS/NAT.
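The tunneling mode splits configuration between the director and the real servers. A hedged sketch with hypothetical addresses (VIP 192.168.1.100, a real server at 203.0.113.11, which may sit in a different network):

```shell
# --- On the director ---
ipvsadm -A -t 192.168.1.100:80 -s rr
# -i selects IP tunneling: requests are IPIP-encapsulated to the real server.
ipvsadm -a -t 192.168.1.100:80 -r 203.0.113.11:80 -i

# --- On each real server ---
# Load the IPIP module and place the VIP on a tunnel interface, so the
# server can decapsulate requests and answer with the VIP as source.
modprobe ipip
ip addr add 192.168.1.100/32 dev tunl0
ip link set tunl0 up
# Relax reverse-path filtering, since replies leave on a different path
# than requests arrived.
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
```

Because forwarding uses encapsulation rather than NAT, the real servers do not need to route their replies back through the director and can even live in other data centers.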
Virtual Server via Direct Routing (VS/DR) – Rather than tunneling, the scheduler rewrites only the destination MAC address of the request frame so that it reaches the chosen real server, which must sit on the same layer‑2 segment and also hold the virtual IP (typically on its loopback interface). The real server then replies directly to the client. This avoids both address translation and tunnel encapsulation, giving the lowest per‑packet overhead of the three modes.
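The direct-routing mode can be sketched the same way, again with hypothetical addresses (VIP 192.168.1.100, real servers 192.168.1.11 and 192.168.1.12 on the same L2 segment as the director):

```shell
# --- On the director ---
ipvsadm -A -t 192.168.1.100:80 -s rr
# -g selects direct routing ("gatewaying"): only the destination MAC changes.
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12:80 -g

# --- On each real server ---
# Hold the VIP on loopback so packets addressed to it are accepted,
# but suppress ARP replies for the VIP so that clients and routers
# keep resolving the VIP to the director's MAC address.
ip addr add 192.168.1.100/32 dev lo
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

The ARP sysctls are the subtle part of VS/DR: without them, a real server may answer ARP queries for the VIP itself and steal traffic from the director.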
Practical DevOps Architecture