Deploying a Percona XtraDB Cluster (PXC) with Docker and HAProxy Load Balancing
This guide explains how to set up a Percona XtraDB Cluster (PXC) using Docker containers, configure five-node replication with strong consistency, and implement HAProxy as a TCP load balancer to achieve high availability and balanced read/write traffic across the cluster.
Cluster Options
1. Replication – fast but only weak consistency, suitable for low‑value data such as logs, posts, news. Uses a master‑slave structure; writes go to master and are synced to slaves, but slaves cannot write back to master. Asynchronous replication may return success to the client before slaves are fully synced.
2. PXC (Percona XtraDB Cluster) – slower but provides strong consistency, ideal for high‑value data like orders, customers, payments. Data sync is bidirectional; any node can read/write and changes are propagated to all nodes. Synchronous replication ensures a transaction is committed on all nodes before the client receives success.
Installation of PXC Cluster
1. Pull the Docker image
docker pull percona/percona-xtradb-cluster:5.7.33
2. Tag the image for brevity
docker tag percona/percona-xtradb-cluster:5.7.33 pxc
# Remove original image
docker rmi percona/percona-xtradb-cluster:5.7.33
3. Create an internal Docker network (net1) for security
# Create network
docker network create --subnet=172.18.0.0/24 net1
# Inspect network
# docker network inspect net1
# Remove network
# docker network rm net1
4. Create five Docker volumes (v1-v5), because the PXC containers cannot directly access host files
docker volume create v1
docker volume create v2
docker volume create v3
docker volume create v4
docker volume create v5
# View volume details
# docker inspect v1
# Remove volume
# docker volume rm v1
5. Launch five PXC nodes (wait ~1 minute after the first node before creating the next)
# Node 1
docker run -d --name=mysql-node1 -p 3310:3306 --privileged=true -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -v v1:/var/lib/mysql --net=net1 --ip 172.18.0.2 pxc
# Node 2
docker run -d --name=mysql-node2 -p 3311:3306 --privileged=true -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=mysql-node1 -v v2:/var/lib/mysql --net=net1 --ip 172.18.0.3 pxc
# Node 3
docker run -d --name=mysql-node3 -p 3312:3306 --privileged=true -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=mysql-node1 -v v3:/var/lib/mysql --net=net1 --ip 172.18.0.4 pxc
# Node 4
docker run -d --name=mysql-node4 -p 3313:3306 --privileged=true -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=mysql-node1 -v v4:/var/lib/mysql --net=net1 --ip 172.18.0.5 pxc
# Node 5
docker run -d --name=mysql-node5 -p 3314:3306 --privileged=true -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=mysql-node1 -v v5:/var/lib/mysql --net=net1 --ip 172.18.0.6 pxc
6. Test the cluster by connecting with Navicat to any node and performing CRUD operations; changes should replicate to all other nodes.
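Before testing with a GUI client, a quick way to confirm that all five nodes have joined is to query Galera's status variables from any node (a minimal sketch; assumes the containers above are up and the root password from the run commands):

```shell
# wsrep_cluster_size should report 5 and wsrep_cluster_status "Primary"
# once every node has joined the cluster
docker exec mysql-node1 mysql -uroot -p123456 \
  -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status');"
```

If wsrep_cluster_size is smaller than 5, check the logs of the missing node with docker logs.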
HAProxy Load Balancing
Without a load balancer, a single node handles all traffic, leading to high load and poor performance. HAProxy distributes requests evenly across the five nodes, reducing per‑node load and improving throughput.
1. Pull HAProxy image
docker pull haproxy:2.3.13
2. Create configuration directory
mkdir -p /home/apps/haproxy
3. Create haproxy.cfg with the following content
global
    chroot /usr/local/etc/haproxy
    log 127.0.0.1 local5 info
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

# Admin stats UI
listen admin_stats
    bind 0.0.0.0:8888
    mode http
    stats uri /dbs
    stats realm Global\ statistics
    stats auth admin:123456

# MySQL load balancing
listen proxy-mysql
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcplog
    option tcpka
    option mysql-check user haproxy
    server mysql-node1 172.18.0.2:3306 check weight 1 maxconn 2000
    server mysql-node2 172.18.0.3:3306 check weight 1 maxconn 2000
    server mysql-node3 172.18.0.4:3306 check weight 1 maxconn 2000
    server mysql-node4 172.18.0.5:3306 check weight 1 maxconn 2000
    server mysql-node5 172.18.0.6:3306 check weight 1 maxconn 2000
4. Create a MySQL user for HAProxy health checks (no password, no privileges)
# Enter the first MySQL container
docker exec -it mysql-node1 /bin/bash
# Login to MySQL
mysql -uroot -p123456
# Create user
create user 'haproxy'@'%' identified by '';
5. Run the HAProxy container
docker run -it -d --name haproxy-node1 -p 4001:8888 -p 4002:3306 --restart always --privileged=true -v /home/apps/haproxy:/usr/local/etc/haproxy --net=net1 --ip 172.18.0.7 haproxy:2.3.13
6. Start HAProxy inside the container
# Enter container
docker exec -it haproxy-node1 /bin/bash
# Launch HAProxy with the config file
haproxy -f /usr/local/etc/haproxy/haproxy.cfg
Access Testing
1. Admin UI: open http://IP:4001/dbs (user: admin, password: 123456).
2. Database access: connect Navicat to IP:4002 (HAProxy forwards to the cluster).
3. Simulate node failure by stopping one or more MySQL containers; the remaining nodes and the HAProxy proxy continue to serve traffic.
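The round-robin distribution can also be observed without a GUI: each new connection through port 4002 should be served by a different container (a sketch; assumes a mysql client on the host and the root password set above):

```shell
# @@hostname reports the container that served each connection;
# with balance roundrobin, successive connections rotate through the five nodes
for i in 1 2 3 4 5; do
  mysql -h 127.0.0.1 -P 4002 -uroot -p123456 -N -e "SELECT @@hostname;"
done
```

If a node is stopped, its hostname simply drops out of the rotation once the mysql-check marks it down.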
Node Failure or Restart Procedures
Slave node failure: if the primary node is still alive, simply restart the stopped container; data will auto-sync.
Primary node failure:
If the primary was the last node to leave the cluster (its data is up-to-date), restart it after setting safe_to_bootstrap: 1 in grastate.dat.
If other nodes are still running and the primary's data is stale, remove the primary container, keep its volume, and re-join it as a slave using -e CLUSTER_JOIN=other-node.
Alternative recovery: delete all containers and the grastate.dat files in the volumes, then recreate the cluster from scratch (risk of data loss if the former primary held the latest data).
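For the safe_to_bootstrap fix, the flag lives in grastate.dat inside the node's volume. The edit below is shown on a stand-in copy of the file; on a real host the path would be the volume's data directory (assumed here to be Docker's default root, e.g. /var/lib/docker/volumes/v1/_data/grastate.dat):

```shell
# Stand-in for the real grastate.dat, so the edit can be demonstrated safely
GRASTATE=$(mktemp)
printf 'safe_to_bootstrap: 0\n' > "$GRASTATE"

# Flip the flag so this node is allowed to bootstrap a new cluster
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' "$GRASTATE"
grep 'safe_to_bootstrap' "$GRASTATE"

# After editing the real file, restart the former primary:
# docker start mysql-node1
```

Only do this on the node that was the last to leave the cluster; bootstrapping from a stale node discards newer transactions held elsewhere.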