
How to Build a High‑Availability Nginx Load Balancer with Keepalived on CentOS 8

This guide walks through setting up Nginx as a reverse proxy and load balancer, configuring Keepalived for high availability, writing monitoring scripts, and testing failover in a CentOS 8 environment with multiple web servers.

Raymond Ops

nginx Load Balancing Introduction

nginx can be used for load balancing to distribute high traffic across multiple servers, improving system throughput, scalability, and reliability; if a server fails, the others continue to serve requests.

Reverse Proxy and Load Balancing

nginx often acts as a reverse proxy for backend servers, enabling static and dynamic content separation and improving processing capacity. Static resources are served directly by nginx, while dynamic requests are proxied to backend services.
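As an illustration of that separation (the paths and backend address below are hypothetical, not part of this setup), static assets can be matched by extension and served directly from disk, while everything else is proxied to the backend:

```nginx
server {
    listen 80;

    # Static resources served directly by nginx
    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        root /var/www/static;
        expires 7d;
    }

    # Dynamic requests forwarded to a backend application server
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```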

nginx Load Balancing Configuration

Define an upstream block in the http section to list the backend servers. By default, nginx distributes requests among them round-robin; adding ip_hash switches to hashing the client's address, so requests from the same client consistently reach the same backend. This gives session persistence, though it can skew distribution when many clients sit behind one address.

upstream idfsoft.com {
    ip_hash;
    server 127.0.0.1:9080 weight=5;
    server 127.0.0.1:8080 weight=5;
    server 127.0.0.1:1111;
}

In the server block, forward requests to the upstream group:

location / {
    proxy_pass http://idfsoft.com;
}
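Putting the two fragments together, a minimal http context would look like the sketch below (the listen port is an assumption; the upstream name and servers are taken from the example above):

```nginx
http {
    upstream idfsoft.com {
        ip_hash;
        server 127.0.0.1:9080 weight=5;
        server 127.0.0.1:8080 weight=5;
        server 127.0.0.1:1111;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://idfsoft.com;
        }
    }
}
```

Running nginx -t validates the configuration before reloading.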

Keepalived High‑Availability nginx Load Balancer

Three hosts are used: a master nginx load balancer (192.168.222.250), a backup load balancer (192.168.222.139), and two web servers (Web1 on 192.168.222.137 running Apache, Web2 on 192.168.222.138 running nginx). A virtual IP (VIP) 192.168.222.133 is shared between master and backup.


Install Keepalived

On both master and backup, install the keepalived package:

# dnf -y install keepalived
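On a default CentOS 8 install, firewalld will drop the VRRP advertisements the two balancers exchange, so the VRRP protocol (IP protocol 112) should be allowed and the service enabled on both nodes. A sketch, assuming firewalld is active:

```shell
# Allow VRRP so master and backup can see each other's advertisements
firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
firewall-cmd --reload

# Start keepalived now and on every boot
systemctl enable --now keepalived
```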

Configure Keepalived

The master configuration (/etc/keepalived/keepalived.conf) sets state MASTER, a higher priority, the virtual IP, and a vrrp_script that monitors nginx:

global_defs {
    router_id lb01
}

vrrp_script nginx_check {
    script "/scripts/check_nginx.sh"
    interval 5
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    track_script {
        nginx_check
    }
    notify_master "/scripts/notify.sh master"
}

The backup configuration is similar, but with state BACKUP and a lower priority, plus notify_master and notify_backup hooks:

global_defs {
    router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master"
    notify_backup "/scripts/notify.sh backup"
}
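The keepalived shipped with CentOS 8 (2.x) can syntax-check a configuration before starting, which catches typos in either file:

```shell
keepalived -t -f /etc/keepalived/keepalived.conf
```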

Write Monitoring Scripts

Create /scripts/check_nginx.sh to stop keepalived if nginx is not running, and /scripts/notify.sh to start or stop nginx when the node becomes master or backup:

# /scripts/check_nginx.sh
#!/bin/bash
# Count running nginx processes, excluding this script and the grep itself.
nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep -c '\bnginx\b')
# If nginx is down, stop keepalived so the VIP fails over to the backup.
if [ "$nginx_status" -lt 1 ]; then
    systemctl stop keepalived
fi

# /scripts/notify.sh
#!/bin/bash
case "$1" in
    master)
        # Becoming master: make sure nginx is running.
        nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep -c '\bnginx\b')
        if [ "$nginx_status" -lt 1 ]; then
            systemctl start nginx
        fi
        ;;
    backup)
        # Becoming backup: stop nginx so it only serves on the active node.
        nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep -c '\bnginx\b')
        if [ "$nginx_status" -gt 0 ]; then
            systemctl stop nginx
        fi
        ;;
    *)
        echo "Usage: $0 master|backup"
        ;;
esac
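Both scripts must exist on both nodes and be executable, otherwise keepalived cannot run them:

```shell
chmod +x /scripts/check_nginx.sh /scripts/notify.sh
```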

Testing Failover

Start services on the master, verify the VIP is on the master, then stop nginx on the master. The check script detects the failure and stops keepalived, releasing the VIP; the backup node acquires it, and its notify_master hook starts nginx automatically. To fail back, start nginx and keepalived on the master again (the check script had stopped keepalived, so it must be restarted); with its higher priority, the master reclaims the VIP.
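The failover test can be sketched as a command sequence, using the interface and addresses configured earlier:

```shell
# On the master: confirm it currently holds the VIP
ip addr show ens33 | grep 192.168.222.133

# Simulate a failure on the master
systemctl stop nginx
# Within the 5-second check interval, check_nginx.sh stops keepalived
# and the VIP moves; confirm on the backup:
ip addr show ens33 | grep 192.168.222.133

# Fail back: restart nginx and keepalived on the master
systemctl start nginx keepalived
```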

Use curl 192.168.222.133 to see responses from Apache and nginx, confirming both load balancing and high availability.
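A small loop makes the distribution easier to observe. Note that if ip_hash is enabled as in the earlier upstream example, all requests from one client address land on the same backend, so testing from two different hosts shows the alternation:

```shell
for i in 1 2 3 4; do curl -s 192.168.222.133; echo; done
```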

Tags: high availability, load balancing, nginx, reverse proxy, CentOS, keepalived
Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.