Smooth 2N Database Scaling and High Availability with MariaDB, Keepalived, and Sharding
This article presents five expansion strategies—shutdown, write‑stop, log‑based, dual‑write, and smooth 2N—detailing step‑by‑step procedures for MariaDB installation, master‑master replication, dynamic data‑source configuration, and Keepalived high‑availability setup, enabling seamless horizontal scaling and minimal service disruption for large‑scale databases.
The guide outlines five database expansion schemes: (1) shutdown, (2) write‑stop, (3) log‑based, (4) dual‑write for medium data volumes, and (5) smooth 2N for large data volumes. Each scheme includes announcement, service pause, data migration, verification, configuration changes, service restoration, and rollback plans.
For smooth 2N scaling, the article describes a practical implementation: initial deployment with three databases (A, B, C) using uid % 3 sharding, then adding a fourth node D, changing the sharding rule to uid % 4, and handling the resulting data redistribution.
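Why the rule change forces a redistribution can be sketched in a few lines of Java (the shard labels A–D follow the article; the sample uid range is an arbitrary assumption):

```java
public class ShardDemo {
    // Shard labels from the article: A, B, C initially; D is the new node.
    static final String[] OLD_NODES = {"A", "B", "C"};
    static final String[] NEW_NODES = {"A", "B", "C", "D"};

    static String oldShard(long uid) { return OLD_NODES[(int) (uid % 3)]; }
    static String newShard(long uid) { return NEW_NODES[(int) (uid % 4)]; }

    public static void main(String[] args) {
        int moved = 0;
        for (long uid = 0; uid < 12; uid++) {
            boolean relocated = !oldShard(uid).equals(newShard(uid));
            if (relocated) moved++;
            System.out.printf("uid %2d: %s -> %s%s%n",
                    uid, oldShard(uid), newShard(uid), relocated ? "  (moves)" : "");
        }
        // Only uids where uid % 3 == uid % 4 stay put (uid 0..2 in this range),
        // so most rows must be migrated before the rule is switched.
        System.out.println("moved " + moved + " of 12");
    }
}
```

Most rows land on a different node after the modulus change, which is why the article pairs the rule switch with migration and verification steps instead of flipping the modulus in place.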
MariaDB installation (section 2.1.1):
yum -y install wget
# Backup repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

Configuration of the MariaDB YUM repository:
[mariadb]
name = MariaDB
baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.2/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Master‑master replication setup (section 2.1.2) includes setting server-id, binary log options, auto‑increment offsets, and creating a replica user:
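Before the replica user is created, the server-id, binary-log, and auto‑increment settings mentioned here would typically live in each node's my.cnf. A minimal sketch for a two-node master‑master pair (the specific values are assumptions, not from the article):

```ini
[mysqld]
server-id                = 1           # must be unique; use 2 on the peer node
log-bin                  = mysql-bin   # enable binary logging for replication
auto_increment_increment = 2           # two masters writing concurrently
auto_increment_offset    = 1           # 1 on this node, 2 on the peer
```

Staggering the auto‑increment offsets keeps the two masters from generating colliding primary keys while both accept writes.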
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'replica'@'%' IDENTIFIED BY 'replica';
FLUSH PRIVILEGES;
CHANGE MASTER TO master_host='192.168.116.141', master_user='replica', master_password='replica', master_port=3306, master_log_file='mysql-bin.000007', master_log_pos=374;
START SLAVE;

Keepalived high‑availability configuration (section 2.1.3) defines a virtual IP and health‑check for two nodes:
global_defs {
router_id vip1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 111
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 6666
}
virtual_ipaddress {
192.168.116.150
}
}
virtual_server 192.168.116.150 3306 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 0
protocol TCP
real_server 192.168.116.140 3306 {
weight 1
TCP_CHECK {
connect_timeout 10
retry 3
delay_before_retry 3
connect_port 3306
}
}
}

The article also introduces ShardingJDBC (part of ShardingSphere) as a lightweight Java JDBC driver that supports dynamic data‑source addition, showing a code snippet for loading a new datasource and updating the sharding rule at runtime.
// Dynamic datasource addition example
Properties properties = loadProperties("datasource1.properties");
DruidDataSource druidDataSource = (DruidDataSource) DruidDataSourceFactory.createDataSource(properties);
OrchestrationShardingDataSource dataSource = SpringContextUtil.getBean("tradeSystemDataSource", OrchestrationShardingDataSource.class);
Map<String, DataSource> dsMap = dataSource.getDataSource().getDataSourceMap();
dsMap.put(DatasourceEnum.DATASOURCE_2.getValue(), druidDataSource);
// Refresh sharding rule and trigger configuration change
dataSource.renew(new DataSourceChangedEvent("/config/schema/logic_db/datasource", dataSourceConfigMap));

Finally, the guide covers adding new database VIPs, expanding the cluster with additional servers (Server3, Server4), configuring dual‑master sync for the new nodes, and updating Keepalived to include the new virtual IPs, ensuring continuous service availability during scaling operations.
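Bringing the new Server3/Server4 pair online could reuse the same Keepalived pattern with a second VRRP instance on a new VIP. A sketch only; the address, interface, and router id below are assumptions extrapolated from the earlier configuration:

```
vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 112        # must differ from the existing instance (111)
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 6666
    }
    virtual_ipaddress {
        192.168.116.151          # new VIP for the Server3/Server4 pair
    }
}
```

The application then registers this new VIP as an additional datasource (as in the ShardingJDBC snippet above), so clients keep addressing stable virtual IPs while physical nodes are added behind them.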