
How LVS Load Balancing Powers High‑Concurrency Web Apps: Architecture & Flow Explained

This article explains the principles of LVS load balancing, its benefits, the HULK LVS cluster architecture, request hashing mechanisms, system‑level high availability, and the HULK LVS forwarding panel, providing a comprehensive guide to building resilient high‑traffic web services.

360 Zhihui Cloud Developer

1. What is Load Balancing

LVS load balancing distributes user requests to multiple backend servers (physical machines, virtual machines, containers, etc.) based on defined forwarding policies. It uses Linux Virtual Server (LVS) together with keepalived, assigns a virtual service address (VIP), and automatically removes unhealthy servers through health checks, enhancing overall service capacity and availability.
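
As a toy illustration of the forwarding-policy idea, here is a minimal round-robin scheduler in Python. The backend addresses are made up for the example; real LVS does this in the kernel, with schedulers configured via `ipvsadm`:

```python
from itertools import cycle

# Hypothetical backend pool (real servers); addresses are illustrative.
BACKENDS = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]

def round_robin(backends):
    """Yield backends in turn, mimicking LVS's simplest scheduler (rr)."""
    return cycle(backends)

scheduler = round_robin(BACKENDS)
assignments = [next(scheduler) for _ in range(6)]
# Each backend receives an equal share of the six requests, in order.
```

LVS ships several such schedulers (round robin, weighted round robin, least connections, and others); the point here is only that the balancer, not the client, decides which real server handles each request.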

2. Benefits of Load Balancing

Protects backend real servers (RS) by not exposing them directly to users.

Performs health checks on backend RS.

Distributes traffic to backend RS according to configured strategies.
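
The health-check benefit can be sketched in a few lines: the balancer probes each RS and only forwards to those that answer. The probe below is a stand-in lambda; a real deployment would use keepalived's TCP/HTTP checks:

```python
def healthy_backends(backends, check):
    """Keep only backends whose health probe passes (keepalived-style check)."""
    return [b for b in backends if check(b)]

# Illustrative failure: pretend the second backend stopped answering probes.
down = {"10.0.0.12:80"}
pool = healthy_backends(
    ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"],
    check=lambda b: b not in down,
)
# Traffic is now distributed only across the two healthy real servers.
```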

3. HULK LVS Cluster Architecture

The cluster is built on BGP + ECMP: ECMP hashes packets across the LVS nodes, while BGP dynamically withdraws the routes of failed machines, giving automatic failover.
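
The failover behavior can be sketched as follows: when BGP withdraws a failed node's route, the switch simply recomputes the hash over the remaining next hops, so that node's flows move to the survivors. Node names and the flow string are illustrative:

```python
import hashlib

def ecmp_pick(flow, next_hops):
    """Hash a flow identifier onto one of the equal-cost next hops."""
    h = int(hashlib.md5(flow.encode()).hexdigest(), 16)
    return next_hops[h % len(next_hops)]

nodes = ["lvs-1", "lvs-2", "lvs-3"]
flow = "198.51.100.7:52344->203.0.113.10:443"
before = ecmp_pick(flow, nodes)

# lvs-2 dies; BGP withdraws its route, shrinking the ECMP next-hop set.
nodes.remove("lvs-2")
after = ecmp_pick(flow, nodes)
# The flow now lands on one of the surviving nodes, with no manual intervention.
```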

4. How User Requests Are Distributed to LVS Servers

LVS servers announce the VIP via BGP; the upstream LVS switches learn the VIP and form equal‑cost multi‑path (ECMP) routes toward it. For each packet, the switch hashes selected header fields (the hash factor) into a hash‑lb key, then takes that key modulo the ECMP next‑hop count to compute a forwarding index, which selects the next hop — that is, the LVS server that receives the packet.
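
The index computation just described can be sketched as below. The choice of SHA‑1 over the five‑tuple is an assumption for illustration; real switches use vendor-specific hash functions over similar header fields:

```python
import hashlib

def forwarding_index(src_ip, src_port, dst_ip, dst_port, proto, n_next_hops):
    """Hash the five-tuple (the hash factor) into a hash-lb key, then map
    the key to an ECMP next-hop index: key mod next-hop count."""
    factor = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    key = int.from_bytes(hashlib.sha1(factor).digest()[:4], "big")
    return key % n_next_hops

idx = forwarding_index("198.51.100.7", 52344, "203.0.113.10", 443, "tcp", 4)
```

Because the same five-tuple always yields the same key, all packets of one TCP connection land on the same LVS server, which is what keeps per-flow state consistent.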

5. System‑Level High Availability

LVS servers use dual 10‑GbE NICs. In the diagram, ETH1 and ETH3 are uplink ports connected to two TOR switches. If the TOR switch reached via ETH1 fails, traffic dynamically switches over to ETH3, so service continues uninterrupted.

6. HULK LVS Forwarding Panel Overview

A user at (CIP+CPORT) sends a request to the virtual service address (VIP+VPORT), which arrives at an LVS server.

The LVS server maintains a session table and performs full NAT: the source (CIP+CPORT) is rewritten to (BIP+BPORT), a local address on the LVS server, and the destination (VIP+VPORT) is rewritten to (RIP+RPORT), the address of the chosen backend RS, before the packet is forwarded.

Backend RS receives the packet, sees its own IP (RIP) and port, builds a response, and sends it back to (BIP+BPORT).

Upon receiving the RS reply, the LVS server uses the session table to map (RIP+RPORT) back to (VIP+VPORT) and (BIP+BPORT) back to (CIP+CPORT), returning the response to the user.
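
The four translation steps above can be sketched with a toy session table. All addresses are illustrative, and a real session table is of course keyed and aged in the kernel:

```python
# Full-NAT address pairs for one session (all values illustrative).
CIP, CPORT = "198.51.100.7", 52344   # client
VIP, VPORT = "203.0.113.10", 80      # virtual service address
BIP, BPORT = "10.0.0.2", 40001      # local address on the LVS server
RIP, RPORT = "10.0.0.21", 8080      # chosen backend real server

# Outbound, the LVS server rewrote (CIP,CPORT)->(VIP,VPORT) into
# (BIP,BPORT)->(RIP,RPORT) and recorded the mapping:
session = {((BIP, BPORT), (RIP, RPORT)): ((CIP, CPORT), (VIP, VPORT))}

def translate_reply(src, dst):
    """Map the RS reply (RIP->BIP) back to the original addresses, so the
    response appears to come from the VIP and goes to the real client."""
    orig_client, orig_service = session[(dst, src)]
    return orig_service, orig_client

reply_src, reply_dst = translate_reply((RIP, RPORT), (BIP, BPORT))
```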

Note: because the client address is replaced by BIP during forwarding, the backend RS cannot see the real client IP directly; the kernel modules TOA and TTM must be installed on the RS to extract the real client address from a TCP option and place it into the kernel socket.
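
To make the TOA mechanism concrete, here is a user-space sketch of walking TCP options to recover the injected client address. The layout (kind, length 8, then 2-byte port and 4-byte IPv4) matches common TOA implementations, but treat the option code 254 as an assumption; the real extraction happens in the TOA kernel module, not in application code:

```python
import socket
import struct

TOA_KIND = 254  # option code commonly used by TOA builds; an assumption here

def parse_toa(tcp_options: bytes):
    """Walk a TCP options blob and return the real client (ip, port) if a
    TOA option (kind, len=8, port, ipv4) is present, else None."""
    i = 0
    while i < len(tcp_options):
        kind = tcp_options[i]
        if kind == 0:        # end-of-option-list
            break
        if kind == 1:        # NOP: single-byte option
            i += 1
            continue
        length = tcp_options[i + 1]
        if kind == TOA_KIND and length == 8:
            port, raw_ip = struct.unpack("!H4s", tcp_options[i + 2:i + 8])
            return socket.inet_ntoa(raw_ip), port
        i += length
    return None

# Build a sample options blob carrying client 198.51.100.7:52344 and parse it.
opts = bytes([TOA_KIND, 8]) + struct.pack("!H", 52344) + socket.inet_aton("198.51.100.7")
real = parse_toa(opts)
```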

Tags: network architecture, high availability, load balancing, BGP, LVS, ECMP
Written by

360 Zhihui Cloud Developer

360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.
