Practical Experience Migrating Over 130 MySQL Instances to Tencent Cloud CDB
Tencent Cloud’s migration team successfully moved over 130 on‑premise MySQL instances totaling 20 TB to CDB by rigorously testing the migration tool, configuring master‑slave replication, handling high‑concurrency binlog sync, and resolving timestamp, stored‑procedure, and DDL issues, ensuring seamless production deployment.
Author: Wang Liang, Senior Engineer at Tencent Cloud. Joined Tencent in 2010, previously responsible for CDN image services and dynamic acceleration operations, now focuses on database product solutions.
A large domestic game developer operates more than 130 IDC‑deployed MySQL instances with a total storage of 20 TB. To meet business requirements, all instances were migrated to Tencent Cloud CDB for MySQL. The Tencent Cloud database team investigated the migration workflow and tools, identified four major problems, and resolved them in time. The following is a detailed migration experience report.
1. Test Cases / Process
Tencent Cloud offers several migration options. Since all of the developer's MySQL instances have external proxy IPs, the migration tool provided by Tencent Cloud was chosen to import data directly.
Migration tool principle: The tool obtains high‑privilege credentials from the source instance, synchronizes basic MySQL configuration to the target CDB instance, uses mysqldump to export data and transfer it to CDB for import, and establishes a master‑slave relationship to sync new data. Communication between the source IDC and the CDB instance is performed via NAT through a server with an external IP.
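The export/import leg of that flow can be sketched roughly as follows. This is an illustrative sketch only: host names and credentials are placeholders, and the actual migration tool automates all of these steps internally.

```shell
# Hypothetical sketch of the dump-and-load leg; hosts and credentials
# are placeholders, and the real migration tool automates this.
# 1. Export from the source, recording binlog coordinates for the
#    later master-slave handoff (--master-data=2 writes them as a comment).
mysqldump -h<idc-host> -uroot -p****** \
  --all-databases --single-transaction --master-data=2 > dump.sql
# 2. Import into the target CDB instance (reached via the NAT server).
mysql -h<cdb-host> -uroot -p****** < dump.sql
```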
1.1 Migration Tool Basic Functions
In the Tencent Cloud Database console, a migration task is created following the guided steps, and the backend page is used to monitor task logs and metrics such as proxy traffic and CDB write throughput.
Knowledge point: generating large test data with mysql_gen_data. The data generation and import process is as follows:
#!/bin/bash
./mysql_gen_data -f "%1n,%100s,%100s,%100s,%100s,%100s" \
-n 10000000 >random.dat
mysql -uroot -p*********** -e "create database cdbtest; use cdbtest; \
CREATE TABLE cdbtest_tb \
(c1 varchar(100),c2 varchar(100),c3 varchar(100), \
c4 varchar(100),c5 varchar(100),c6 varchar(100)) \
ENGINE=InnoDB DEFAULT CHARSET=utf8;”
for i in {1..10}; do
echo "$(date '+%T') round $i start"
echo "prepare data..."
sed -i "s/^/$i/" random.dat
echo "insert data..."
mysql -uroot -p******** cdbtest -e "LOAD DATA local INFILE '/data/random.dat' into table cdbtest_tb fields terminated by ',';"
echo "$(date '+%T') round $i end"
done

2. Master-Slave and CDB Synchronization
The developer used their own IDC slaves to replicate data to CDB. The simple topology is shown in the diagram (omitted). The MySQL master configuration used for the test is:
server_id = 98
log_bin = binlog
binlog_format = ROW
innodb_stats_on_metadata = off

After establishing replication, both the console and the backend confirmed that the migration succeeded.
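Pointing an IDC slave at the master is the standard MySQL replication setup. A minimal sketch is shown below; the host, replication user, password, and binlog coordinates are all placeholders, not the values used in the actual test.

```shell
# Hypothetical sketch: attach a slave to the source master.
# Host, user, password, and binlog coordinates are placeholders.
mysql -uroot -p****** -e "CHANGE MASTER TO \
  MASTER_HOST='10.0.0.98', MASTER_PORT=3306, \
  MASTER_USER='repl', MASTER_PASSWORD='******', \
  MASTER_LOG_FILE='binlog.000001', MASTER_LOG_POS=4; \
  START SLAVE;"
# Verify both replication threads are running.
mysql -uroot -p****** -e "SHOW SLAVE STATUS\G" \
  | grep -E 'Slave_IO_Running|Slave_SQL_Running'
```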
3. Multi‑Instance + Large Binlog Concurrent Synchronization
To simulate a high‑concurrency scenario, 15 MySQL instances were run on a single server, each mapped to a different port, and each instance was paired with a separate CDB instance for master‑slave replication. The concurrent migration generated roughly 100 GB of binlog per day.
Knowledge point: creating multiple MySQL instances on one server using the built‑in mysqld_multi script. Example configuration ( /etc/my.conf ) is shown below:
[mysqld_multi]
mysqld = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user = root
password = ******
[mysqld1]
port = 3306
... (other parameters)
[mysqld2]
port = 3312
... (other parameters)
[mysqld3]
........
[mysqld4]
.......

Instances are started with mysqld_multi start 1-4. Screenshots (omitted) show the running instances.
A scheduled update script generates a large amount of binlog (≈700 MB per run per instance, executed every 2 hours, totaling ~126 GB per day across the 15 instances):
#!/bin/sh
SET_STRING=`date +"%s"`
LOG_NAME="/data/log/update.log"
NOW_STRING=`date +"[%Y-%m-%d %H:%M:%S]"`
for i in {12..26}
do
BEGIN_TIME=`date +"[%Y-%m-%d %H:%M:%S]"`
echo ${BEGIN_TIME}" Update data in this time is:"$SET_STRING >> $LOG_NAME
echo ${BEGIN_TIME}" Update database"${i}" start..." >> $LOG_NAME
mysql -uroot migrate${i} -S /data/mysql${i}/mysql.sock -e "update tb set data1=\"${SET_STRING}\""
END_TIME=`date +"[%Y-%m-%d %H:%M:%S]"`
echo ${END_TIME}" Update database"${i}" end..." >> $LOG_NAME
done

Fifteen migration tasks were created using the database migration tool, and both the console and backend confirmed successful completion.
A verification script periodically compares data between the source IDC and the target CDB to ensure consistency:
#!/bin/sh
# $1 is the value the update script wrote to every instance
DATA_CORRECT=$1
NOW_TIME=`date +"[%Y-%m-%d %H:%M:%S]"`
# my.file lists one instance per line: "IP PORT DATABASE"
cat my.file | while read line
do
IP=`echo $line | awk -F " " '{print $1}'`
PORT=`echo $line | awk -F " " '{print $2}'`
DATABASE=`echo $line | awk -F " " '{print $3}'`
DATA_INBASE=`mysql -uroot -P${PORT} -h${IP} -p123456cdb ${DATABASE} -e "select data1 from tb limit 1\G" | grep data1 | awk -F " " '{print $2}'`
echo ${NOW_TIME}"[INFO]Data you want to update to ${DATABASE} is:"$DATA_CORRECT
echo ${NOW_TIME}"[INFO]Data from Database "${DATABASE}" is:"$DATA_INBASE
if [ "$DATA_INBASE" -eq "$DATA_CORRECT" ]
then
echo ${NOW_TIME}"[SUCCESS]"${DATABASE}" update successfully!"
else
echo ${NOW_TIME}"[ERROR]"${DATABASE}" update ERROR!"
fi
done

4. Issues Encountered
Initial master-slave creation failed with "Error: Can't connect to MySQL server on 10.*.*.*". The cause was TCP timestamp checking combined with TIME_WAIT recycling on the NATed path between the CDB proxy and the IDC NAT machine: connections arriving with apparently out-of-order timestamps were silently dropped. Fix: disable TCP timestamps and TIME_WAIT recycling on the IDC server:
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_tw_recycle = 0
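Applying the fix persistently might look like the following config change (a sketch using the standard sysctl mechanism; requires root on the IDC server):

```shell
# Persist the two settings and reload the kernel parameters.
echo "net.ipv4.tcp_timestamps = 0" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0" >> /etc/sysctl.conf
sysctl -p
```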
Cross-version migration of stored procedures failed with "ERROR: Can't load from mysql.proc. The table is probably corrupted". Resolution: skip the mysql.proc table during migration for version-incompatible instances.
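If the export goes through mysqldump, skipping the table could be done with the standard --ignore-table option. The sketch below is illustrative; host and credentials are placeholders.

```shell
# Hypothetical sketch: dump everything except mysql.proc from a
# version-incompatible source; host and credentials are placeholders.
mysqldump -h<idc-host> -uroot -p****** \
  --all-databases --ignore-table=mysql.proc \
  --single-transaction --master-data=2 > dump.sql
```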
Creating a new database during migration caused binlog import to fail ("errno:1049: Error 'Unknown database 'xxxx'' on query."). The migration task records the database list when it starts, so a database introduced by a later DDL operation is unknown to the binlog-apply step. Solution: avoid DDL operations while a migration task is running.
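Before cutting over, it may be worth scanning recent binlogs for DDL that slipped in during the migration window. A sketch, assuming the binlog path shown is illustrative:

```shell
# Hypothetical sketch: list DDL statements recorded in a binlog file.
# /data/mysql/binlog.000042 is an illustrative path, not a real one.
mysqlbinlog /data/mysql/binlog.000042 \
  | grep -iE '^(CREATE|DROP|ALTER) (DATABASE|TABLE)'
```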
5. Summary
Thorough pre‑migration testing—including functional, performance, and boundary‑condition tests—ensured that the production migration proceeded without data inconsistency or operational failures. To date, more than 130 MySQL instances have been successfully migrated to Tencent Cloud CDB and are running in production. The experience demonstrates our capability to provide reliable, high‑efficiency database migration services for Tencent Cloud users.
Tencent Cloud Developer
Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.