How to Build a High‑Availability Nginx Load Balancer with Keepalived on CentOS 8
This guide walks you through setting up Nginx as a reverse‑proxy and load balancer, configuring Keepalived for high availability, writing monitoring scripts, and testing the failover on a CentOS 8 environment with multiple web servers.
nginx Load Balancing Introduction
nginx can be used for load balancing to distribute high traffic across multiple servers, improving system throughput, scalability, and reliability; if a server fails, the others continue to serve requests.
Reverse Proxy and Load Balancing
nginx often acts as a reverse proxy for backend servers, enabling static and dynamic content separation and improving processing capacity. Static resources are served directly by nginx, while dynamic requests are proxied to backend services.
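As a sketch of that separation (the `/static/` path, root directory, and upstream name here are illustrative, not part of this guide's later configuration), a server block might serve static files directly and proxy everything else:

```nginx
server {
    listen 80;

    # Static assets are served by nginx itself, straight from disk.
    location /static/ {
        root /var/www/site;   # illustrative path
    }

    # Dynamic requests are handed off to the backend application servers.
    location / {
        proxy_pass http://backend_app;   # illustrative upstream name
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```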
nginx Load Balancing Configuration
Define an `upstream` block in the `http` section to list the backend servers. By default, nginx distributes requests among them round-robin; adding the `ip_hash` directive instead hashes each client's IP address so that requests from the same client always reach the same backend, which is useful for session persistence.
```nginx
upstream idfsoft.com {
    ip_hash;
    server 127.0.0.1:9080 weight=5;
    server 127.0.0.1:8080 weight=5;
    server 127.0.0.1:1111;
}
```

In the `server` block, forward requests to the upstream group:

```nginx
location / {
    proxy_pass http://idfsoft.com;
}
```

Keepalived High‑Availability nginx Load Balancer
Three hosts are used: a master nginx load balancer (192.168.222.250), a backup load balancer (192.168.222.139), and two web servers (Web1 on 192.168.222.137 running Apache, Web2 on 192.168.222.138 running nginx). A virtual IP (VIP) 192.168.222.133 is shared between master and backup.
Install Keepalived
On both master and backup, install the keepalived package:
```shell
# dnf -y install keepalived
```

Configure Keepalived
The master configuration (`/etc/keepalived/keepalived.conf`) sets `state MASTER`, a higher priority, the virtual IP, and a `vrrp_script` to monitor nginx:
```
global_defs {
    router_id lb01
}

vrrp_script nginx_check {
    script "/scripts/check_nginx.sh"
    interval 5
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    track_script {
        nginx_check
    }
    notify_master "/scripts/notify.sh master"
}
```

The backup configuration is similar, but with `state BACKUP` and a lower priority, plus `notify_master` and `notify_backup` hooks:
```
global_defs {
    router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master"
    notify_backup "/scripts/notify.sh backup"
}
```

Write Monitoring Scripts
Create `/scripts/check_nginx.sh` to stop keepalived if nginx is not running, and `/scripts/notify.sh` to start or stop nginx when the node becomes master or backup. Make both scripts executable (`chmod +x /scripts/*.sh`):
```shell
#!/bin/bash
# /scripts/check_nginx.sh
# Count running nginx processes, excluding this script and the grep itself.
nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
if [ "$nginx_status" -lt 1 ]; then
    systemctl stop keepalived
fi
```

```shell
#!/bin/bash
# /scripts/notify.sh
case "$1" in
master)
    # This node became master: make sure nginx is running.
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -lt 1 ]; then
        systemctl start nginx
    fi
    ;;
backup)
    # This node was demoted to backup: stop nginx.
    nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep '\bnginx\b' | wc -l)
    if [ "$nginx_status" -gt 0 ]; then
        systemctl stop nginx
    fi
    ;;
*)
    echo "Usage: $0 master|backup"
    ;;
esac
```

Testing Failover
Start services on the master, verify the VIP is on the master, then stop nginx on the master. Keepalived detects the failure, releases the VIP, and the backup node acquires it, automatically starting nginx on the backup. Restoring nginx on the master causes the VIP to move back.
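The failover behavior above is driven by the notify hook. As a sketch, its decision logic can be factored into a small function and exercised in isolation; the explicit running-process count argument is an assumption for illustration (the real `/scripts/notify.sh` derives it from `ps`):

```shell
#!/bin/sh
# decide ROLE RUNNING_COUNT -> prints the action the notify hook would take.
decide() {
    role=$1
    running=$2
    case "$role" in
    master)
        # New master: nginx must be up to serve the VIP.
        if [ "$running" -lt 1 ]; then echo "start nginx"; else echo "noop"; fi
        ;;
    backup)
        # Demoted to backup: this node should stop serving.
        if [ "$running" -gt 0 ]; then echo "stop nginx"; else echo "noop"; fi
        ;;
    *)
        echo "usage: decide master|backup <count>"
        ;;
    esac
}

decide master 0   # new master without nginx -> start nginx
decide backup 1   # demoted node still running nginx -> stop nginx
```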
Run `curl 192.168.222.133` repeatedly to see responses alternate between Apache (Web1) and nginx (Web2), confirming both load balancing and high availability.
Raymond Ops
Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.