
How to Build a Highly Available Load Balancer with LVS and Keepalived

This tutorial explains how to design and deploy a high‑availability web cluster using Linux Virtual Server (LVS) and Keepalived, covering terminology, test environment setup, detailed configuration steps, HA testing procedures, and a concise summary of the solution.

Raymond Ops

Introduction

When traffic reaches a certain level, a single‑node service becomes a bottleneck. Load balancing with Nginx is common, but the load‑balancer itself can fail, so a highly available solution is needed. This article introduces a high‑availability web cluster based on LVS + Keepalived.

LVS and Keepalived

LVS (Linux Virtual Server) is a layer‑4 load balancer built into the Linux kernel (the IPVS module); ipvsadm is its command‑line management tool. Its main features are:

Works at layer 4 (the transport layer), so it handles heavy traffic with low CPU and memory overhead; throughput is limited mainly by the NIC.

Few configuration options, which reduces the chance of human error.

Broad applicability: it can balance web services as well as other TCP applications such as MySQL.

Requires a virtual IP (VIP) that must be allocated from the IDC.

Keepalived implements VRRP to provide high availability, avoiding single‑point‑of‑failure for the VIP. It works well with LVS and other load‑balancers like HAProxy or Nginx.
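To see what Keepalived automates, the same virtual server could be configured by hand with ipvsadm. A sketch using this article's addresses (in the full setup below, Keepalived creates and health-checks these entries for you):

<code># Add a virtual service on the VIP with round-robin scheduling
ipvsadm -A -t 172.17.13.252:80 -s rr
# Add both real servers in direct-routing (DR) mode, weight 1
ipvsadm -a -t 172.17.13.252:80 -r 172.17.13.142:80 -g -w 1
ipvsadm -a -t 172.17.13.252:80 -r 172.17.13.173:80 -g -w 1
# List the resulting IPVS table
ipvsadm -Ln</code>

Entries added this way are not monitored: if an RS dies, ipvsadm keeps forwarding to it, which is exactly the gap Keepalived's health checks fill.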

Key Terminology

LB – Load Balancer

HA – High Availability

Failover – automatic switch when a node fails

Cluster – group of nodes providing a service

LVS – Linux Virtual Server

DS (Director Server) – front‑end load‑balancer node

RS (Real Server) – back‑end real server

VIP – Virtual IP address presented to clients

DIP – Director IP used for internal communication

RIP – Real Server IP

CIP – Client IP

Test Environment

Software: CentOS 7, Keepalived 1.3.5, ipvsadm 1.27

DS1 (MASTER): 172.17.13.120

DS2 (BACKUP): 172.17.13.123

RS1: 172.17.13.142:80 (Nginx)

RS2: 172.17.13.173:80 (Nginx)

VIP: 172.17.13.252

<code>             VIP: 172.17.13.252
                     |
        +------------+------------+
        |                         |
+-------+--------+       +--------+-------+
| DS1 (MASTER)   |       | DS2 (BACKUP)   |
| 172.17.13.120  |       | 172.17.13.123  |
| LVS+Keepalived |       | LVS+Keepalived |
+-------+--------+       +--------+-------+
        |                         |
        +------------+------------+
                     |
        +------------+------------+
        |                         |
+-------+--------+       +--------+-------+
| RS1            |       | RS2            |
| 172.17.13.142  |       | 172.17.13.173  |
| Web Server     |       | Web Server     |
+----------------+       +----------------+</code>

Detailed Configuration Steps

Install required packages

<code># yum install ipvsadm keepalived -y</code>

Configure Keepalived on the MASTER node

<code># vi /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface enp1s0
    virtual_router_id 62
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.17.13.252
    }
}
virtual_server 172.17.13.252 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 172.17.13.173 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 172.17.13.142 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}</code>

Configure the BACKUP node

Copy the configuration file from the MASTER, change state to BACKUP, and set a lower priority (for example 100) so the MASTER wins the VRRP election, then restart Keepalived on both nodes.

<code># systemctl restart keepalived</code>
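After restarting, the MASTER should hold the VIP and the IPVS table should be populated. A quick check (assuming the interface name enp1s0 from the configuration above):

<code># On the MASTER: the VIP should appear as a secondary address
ip addr show enp1s0 | grep 172.17.13.252
# The virtual server and both real servers should be listed
ipvsadm -Ln
# Keepalived state transitions (MASTER/BACKUP) are logged to the journal
journalctl -u keepalived --no-pager | tail</code>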

Configure Real Servers

Each RS needs a script to bind the VIP to the loopback interface and adjust ARP settings:

<code>#!/bin/bash
# Bind the VIP on loopback and suppress ARP so that only the director
# answers ARP requests for the VIP (required for LVS DR mode).
SNS_VIP=172.17.13.252
case "$1" in
start)
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    # Do not answer ARP for the VIP; announce only the primary address
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del -host $SNS_VIP dev lo:0 >/dev/null 2>&1
    # Restore default ARP behaviour
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0</code>

Make the script executable and start it:

<code># chmod a+x lvs-web.sh
# ./lvs-web.sh start</code>
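The lo:0 binding does not survive a reboot. One way to persist it is a small systemd unit that calls the script (a sketch; the path /usr/local/sbin/lvs-web.sh is an assumption and should match wherever you placed the script):

<code># /etc/systemd/system/lvs-web.service
[Unit]
Description=Bind LVS VIP on loopback for DR real server
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/lvs-web.sh start
ExecStop=/usr/local/sbin/lvs-web.sh stop

[Install]
WantedBy=multi-user.target</code>

Enable it on each RS with systemctl enable --now lvs-web.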

HA Testing

After both LB nodes are running, verify the VIP is present with ip a. Use watch ipvsadm -Ln --stats to monitor traffic distribution. Curl the VIP in a loop to see round‑robin responses.
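A simple loop makes the distribution visible. Note that persistence_timeout 50 in the configuration above pins a given client to one RS for 50 seconds, so requests from a single machine will all hit the same backend; comment that line out temporarily to observe pure round‑robin:

<code># Serve a page on each RS that identifies it (e.g. its hostname),
# then request the VIP repeatedly
for i in $(seq 1 6); do curl -s http://172.17.13.252/; done</code>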

Stop one RS; LVS automatically removes it from the pool and adds it back when it recovers. Stop the MASTER Keepalived process; the VIP floats to the BACKUP node and returns to MASTER when it comes back online, demonstrating full HA.
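The failover itself can be exercised from the DS nodes (a sketch of the test sequence):

<code># On the MASTER: simulate a director failure
systemctl stop keepalived
# On the BACKUP: the VIP should appear within a few advert intervals
ip addr show enp1s0 | grep 172.17.13.252
# On the MASTER: restart; with its higher priority it preempts
# the BACKUP and takes the VIP back
systemctl start keepalived</code>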

Conclusion

This article showed how to build a highly available load‑balancing cluster with LVS and Keepalived, providing a stable service platform. Keepalived integrates cleanly with LVS, and Nginx or HAProxy can serve as alternative load balancers depending on business needs.

Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.