Achieve Web High Availability and Static/Dynamic Separation with HAProxy & Keepalived
This article walks through building a highly available web setup with static/dynamic content separation using HAProxy combined with Keepalived: load-balancing concepts, VRRP basics, step-by-step configuration of time synchronization, hostnames, SSH trust, and the required packages, and failover testing.
Introduction
Software load balancing comes in two forms: solutions built into the operating system and third-party applications. LVS is an OS-level solution, while HAProxy is a third-party application. HAProxy is easier to use than LVS but, like LVS, provides no high availability for the load balancer itself; that gap is what Keepalived fills.
Background
HAProxy
HAProxy is a free, high‑availability load balancer and proxy for TCP and HTTP applications. It supports thousands of concurrent connections, session persistence, and layer‑7 processing, making it suitable for high‑traffic web sites.
Keepalived
Keepalived implements failover for Linux servers using VRRP (Virtual Router Redundancy Protocol). VRRP lets a group of routers share a virtual IP address: only one router is master at any given time, and when it fails a backup automatically takes over the virtual IP. With a 1-second advertisement interval, a backup typically detects a failed master within roughly three seconds (three missed advertisements plus skew time).
High‑Availability Solution
Experimental topology:
<code># System environment: CentOS 6.6
# Static server: httpd
# Dynamic server: LAMP</code>
Configuration Steps
Prerequisites for HA cluster:
Time synchronization
Hostname‑based communication
SSH trust
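Once the steps below are done, the time-sync prerequisite can be sanity-checked with a small helper. This is only an illustrative sketch; the hostname `node2` and the 1-second tolerance are assumptions:

```shell
# check_skew: report the clock skew between two epoch-second timestamps.
# In practice the second argument would come from the peer, e.g.:
#   check_skew "$(date +%s)" "$(ssh node2 'date +%s')"
check_skew() {
    skew=$(( $1 - $2 ))
    [ "$skew" -lt 0 ] && skew=$(( -skew ))
    if [ "$skew" -le 1 ]; then
        echo "OK: skew ${skew}s"
    else
        echo "WARN: skew ${skew}s - run ntpdate on both nodes"
    fi
}

check_skew 1718000000 1718000000   # clocks aligned -> OK: skew 0s
```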
Synchronize time (using ntpdate):
<code>[root@node1 ~]# ntpdate cn.pool.ntp.org</code>
Configure hostnames and /etc/hosts on both nodes:
<code>[root@node1 ~]# vim /etc/hosts
172.16.10.123 node1.scholar.com node1
172.16.10.124 node2.scholar.com node2
[root@node1 ~]# vim /etc/sysconfig/network
HOSTNAME=node1.scholar.com
[root@node1 ~]# uname -n
node1.scholar.com
# Perform the same steps on the second node</code>
Establish SSH trust between nodes:
<code>[root@node1 ~]# ssh-keygen -t rsa -P ''
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2
[root@node2 ~]# ssh-keygen -t rsa -P ''
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1
[root@node1 ~]# date; ssh node2 'date'</code>
Install required packages on both nodes:
<code>[root@node1 ~]# yum install keepalived haproxy -y
# Install on the second node as well</code>
Configure Keepalived
<code>[root@node1 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ab007
    }
    virtual_ipaddress {
        192.168.12.21
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 61
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass sr200
    }
    virtual_ipaddress {
        192.168.12.22
    }
}</code>
Synchronize the configuration file to the second node:
<code>[root@node1 ~]# scp /etc/keepalived/keepalived.conf node2:/etc/keepalived/</code>
Modify the configuration on the second node (swap the MASTER/BACKUP roles and priorities):
<code>[root@node2 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ab007
    }
    virtual_ipaddress {
        192.168.12.21
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 61
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass sr200
    }
    virtual_ipaddress {
        192.168.12.22
    }
}</code>
Configure HAProxy
<code>[root@node1 ~]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend proxy *:80
    acl url_static  path_beg -i /static /images /javascript /stylesheets
    acl url_static  path_end -i .jpg .gif .png .css .js
    acl url_dynamic path_end -i .php .jsp
    use_backend dynamic if url_dynamic
    default_backend static

backend static
    balance source
    server s1 172.16.10.125:80 check inter 1500 rise 2 fall 3

listen statistics
    mode http
    bind *:8080
    stats enable
    stats auth admin:admin
    stats uri /admin?status
    stats hide-version
    stats admin if TRUE
    stats refresh 3s
    acl allow src 192.168.12.0/24
    tcp-request content accept if allow
    tcp-request content reject

backend dynamic
    balance source
    server s2 172.16.10.12:80 check inter 1500 rise 2 fall 3</code>
Synchronize the HAProxy configuration to the second node:
<code>[root@node1 ~]# scp /etc/haproxy/haproxy.cfg node2:/etc/haproxy/</code>
Web Server Setup
Prepare test pages:
<code># Static server
[root@scholar ~]# vim /var/www/html/index.html
<h1>172.16.10.125</h1>
[root@scholar ~]# service httpd start

# Dynamic server
[root@scholar ~]# vim /var/www/html/index.php
<h1>172.16.10.20</h1>
<?php
$link = mysql_connect('127.0.0.1','root','');
if ($link) echo "Success..."; else echo "Failure...";
mysql_close();
phpinfo();
?>
[root@scholar ~]# service httpd start
[root@scholar ~]# service mysqld start</code>
Start services on both nodes:
<code>[root@node1 ~]# service haproxy start; ssh node2 'service haproxy start'
[root@node1 ~]# service keepalived start; ssh node2 'service keepalived start'</code>
Static/Dynamic Separation and HA Testing
Check each node's IP addresses and browse the test pages through the virtual IPs (screenshots omitted for brevity); the HAProxy statistics page on port 8080 shows the health of both backends. Then simulate a node failure by stopping the services on one node:
<code>[root@node1 ~]# service haproxy stop
[root@node1 ~]# service keepalived stop</code>
Observe that the virtual IP held by node1 moves to the surviving node and traffic continues uninterrupted, confirming successful high availability and static/dynamic separation.
The experiment demonstrates that HAProxy combined with Keepalived is a reliable solution for web high availability and static/dynamic separation. For larger deployments, more web servers can be added to each backend, and a load-balancing algorithm such as round-robin can be used instead of source hashing.
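The scaling suggestion can be sketched as a backend fragment; the second static server (`s3` at 172.16.10.126) is a hypothetical addition, not part of the original setup:

```
backend static
    balance roundrobin
    server s1 172.16.10.125:80 check inter 1500 rise 2 fall 3
    server s3 172.16.10.126:80 check inter 1500 rise 2 fall 3
```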
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on the evolution of operations work and hope to grow with you throughout your Ops career.