
Mastering IPVS: Build High‑Performance Load Balancers with LVS

This article explains the concept of IPVS (IP Virtual Server) as a layer‑4 load balancer, compares ipvs with iptables, details LVS scheduling algorithms, and provides step‑by‑step commands for configuring ipvsadm on both load‑balancer and real‑server nodes, including VIP setup and client testing.

Raymond Ops

Concept

IPVS (IP Virtual Server) implements transport‑layer (layer‑4) load balancing as part of the Linux kernel. It runs on the host and acts as a load balancer in front of a real‑server cluster, forwarding TCP and UDP service requests to the real servers and presenting them as a single virtual service.
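Since IPVS lives in the kernel, it must be available as loaded modules before ipvsadm can do anything. A quick sanity check (module names are the standard mainline-kernel ones):

```shell
# Load the IPVS core and the schedulers you intend to use
modprobe ip_vs
modprobe ip_vs_rr    # round-robin
modprobe ip_vs_wrr   # weighted round-robin
modprobe ip_vs_sh    # source hash

# Confirm they are present
lsmod | grep '^ip_vs'
```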

ipvs vs. iptables

kube-proxy supports both iptables and ipvs modes. ipvs mode was introduced in Kubernetes v1.8, entered beta in v1.9, and became stable in v1.11. iptables mode has been supported since v1.1 and has been the default since v1.2. Both modes rely on netfilter, but they differ in scalability and features.

ipvs provides better scalability and performance for large clusters.

ipvs supports more complex scheduling algorithms (least load, least connections, weighted, etc.).

ipvs offers health‑checking and connection‑retry capabilities.

ipvs depends on iptables

ipvs uses iptables for packet filtering, SNAT, and masquerading. Specifically, ipvs mode collects the addresses that need DROP or MASQUERADE treatment into an ipset, so the number of iptables rules stays constant regardless of the number of services.

LVS scheduling algorithms

1. Round Robin (rr) Requests are distributed cyclically across servers, assuming equal processing capacity.

2. Weighted Round Robin (wrr) Adds an integer weight to each server; servers with higher weights receive proportionally more requests.

3. Least Connections (lc) Directs traffic to the server with the fewest active connections.

4. Weighted Least Connections (wlc) Combines weight with the least‑connections principle.

5. Locality‑Based Least Connections (lblc) Selects the nearest server that can handle the request based on the destination IP.

6. Locality‑Based Least Connections with Recovery (lblcr) Maintains a mapping of destination IP to a set of servers to avoid overloading a single node.

7. Destination Hash (dh) Hashes the destination IP to map it to a server; the mapping persists even if the server becomes overloaded.

8. Source Hash (sh) Similar to dh but hashes the source IP, providing a static assignment of clients to servers.
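The scheduler is chosen per virtual service with the `-s` flag when the service is created, and weights are set per real server with `-w`. A sketch using a hypothetical VIP of 192.168.0.89:

```shell
# Weighted least connections: 192.168.0.93 is sized to take ~3x the load
ipvsadm -A -t 192.168.0.89:80 -s wlc
ipvsadm -a -t 192.168.0.89:80 -r 192.168.0.93 -g -w 3
ipvsadm -a -t 192.168.0.89:80 -r 192.168.0.94 -g -w 1
```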

ipvsadm parameters

<code>Add virtual server
    Syntax: ipvsadm -A [-t|u|f] [vip_addr:port] [-s <scheduler>]
    -A: add
    -t: TCP protocol
    -u: UDP protocol
    -f: firewall mark
    -D: delete virtual server entry
    -E: edit virtual server entry
    -C: clear all entries
    -L: list
Add backend RealServer
    Syntax: ipvsadm -a [-t|u|f] [vip_addr:port] -r <ip_addr> [-g|i|m] [-w <weight>]
    -a: add
    -t: TCP protocol
    -u: UDP protocol
    -f: firewall mark
    -r: specify backend real server IP
    -g: DR mode
    -i: TUN mode
    -m: NAT mode
    -w: specify weight
    -d: delete realserver entry
    -e: edit realserver entry
    -l: list
General:
    ipvsadm -ln: list rules
    service ipvsadm save: save rules</code>
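The edit and delete flags from the table above work the same way as their add counterparts. A short illustration against the example VIP used later in this article:

```shell
# Change the scheduler of an existing virtual service (-E)
ipvsadm -E -t 192.168.0.89:80 -s wrr

# Remove a single real server from the service (-d)
ipvsadm -d -t 192.168.0.89:80 -r 192.168.0.94

# Delete the whole virtual service (-D), or clear every entry (-C)
ipvsadm -D -t 192.168.0.89:80
ipvsadm -C
```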

Load balancer side

<code>Install LVS
    [root@lb01 ~]# yum -y install ipvsadm
    [root@lb01 ~]# ipvsadm
Add and bind VIP
    [root@lb01 ~]# ip addr add 192.168.0.89/24 dev eth0 label eth0:1
Configure LVS‑DR mode
    [root@lb01 ~]# ipvsadm -A -t 192.168.0.89:80 -s rr   // create the virtual service with the round-robin scheduler
    [root@lb01 ~]# ipvsadm -a -t 192.168.0.89:80 -r 192.168.0.93 -g   // add real server in DR mode
    [root@lb01 ~]# ipvsadm -a -t 192.168.0.89:80 -r 192.168.0.94 -g   // add real server in DR mode</code>
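Before moving on to the real servers, verify the table on the load balancer. With the commands above, `ipvsadm -ln` should show both backends under the VIP with `Route` forwarding (the DR mode set by `-g`); the header lines below reflect ipvsadm's usual output format:

```shell
[root@lb01 ~]# ipvsadm -ln
# IP Virtual Server version 1.2.1 (size=4096)
# Prot LocalAddress:Port Scheduler Flags
#   -> RemoteAddress:Port   Forward Weight ActiveConn InActConn
# TCP  192.168.0.89:80 rr
#   -> 192.168.0.93:80      Route   1      0          0
#   -> 192.168.0.94:80      Route   1      0          0
```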

Real‑Server side

<code>Configure test backend realserver
    (httpd configuration omitted)
    [root@realserver-1 ~]# curl 192.168.0.93   # test realserver‑1 website
    192.168.0.93
    [root@realserver-2 ~]# curl 192.168.0.94   # test realserver‑2 website
    192.168.0.94
Bind VIP to lo interface
    [root@realserver-1 ~]# ip addr add 192.168.0.89/32 dev lo label lo:1   # DR mode requires realserver to have the VIP
Suppress ARP
    [root@realserver-1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    [root@realserver-1 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    [root@realserver-1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    [root@realserver-1 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore</code>
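The `echo` commands above take effect immediately but do not survive a reboot. To persist them, the equivalent sysctl keys can be appended to /etc/sysctl.conf (a sketch; repeat on every real server):

```shell
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
EOF
sysctl -p
```

arp_ignore=1 stops the real server from answering ARP queries for the VIP on lo, and arp_announce=2 keeps it from advertising the VIP as a source address; without both, real servers would compete with the load balancer for the VIP's ARP entry.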

Client test

<code>[root@test ~]# curl 192.168.0.89
192.168.0.93
[root@test ~]# curl 192.168.0.89
192.168.0.94</code>
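With the round-robin scheduler, repeated requests should alternate between the two backends. A quick loop makes the pattern easy to see:

```shell
# Each response prints the backend's own IP, so alternation
# between 192.168.0.93 and 192.168.0.94 confirms rr scheduling
for i in 1 2 3 4; do curl -s 192.168.0.89; done
```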
Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
