Performance Evaluation of DTLE Parallel Replay with MTS Feature
This article examines how enabling the MTS (writeset-based incremental parallel replay) feature in DTLE and in MySQL affects data transfer speed, CPU and memory usage, and behavior under network latency, presenting test procedures, configurations, and performance results that show up to a 28-fold speed improvement.
Background
Before version 3.21.07.0, DTLE supported parallel replay only when the source MySQL had executed transactions concurrently within the same binlog group, so true parallel replay was hard to trigger no matter how high the target-side concurrency was configured. Setting binlog_transaction_dependency_tracking=WRITESET on the source can yield high parallelism, but the default is COMMIT_ORDER. This article explores how to achieve highly parallel DTLE replay without changing the MySQL configuration, and evaluates the MTS (writeset-based incremental parallel replay) feature introduced in DTLE 3.21.07.0 and improved in 3.21.08.0.
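To make the mechanism concrete, here is a small, self-contained sketch of how writeset-based dependency tracking works in principle: each transaction's modified row keys are hashed, and a transaction depends only on the most recent earlier transaction that wrote one of the same rows. The hash function and bookkeeping below are illustrative stand-ins, not MySQL's actual XXHASH64 implementation.

```python
import hashlib


def row_hash(schema, table, pk):
    # Stand-in for MySQL's XXHASH64 over (schema, table, key) bytes.
    return hashlib.sha1(f"{schema}.{table}.{pk}".encode()).hexdigest()


def assign_last_committed(transactions):
    """For each transaction (a list of (schema, table, pk) row keys),
    record the sequence number of the latest earlier transaction that
    wrote any of the same rows. A transaction whose dependency number
    is lower than another's sequence number can replay in parallel
    with it."""
    history = {}  # row hash -> sequence number of its last writer
    result = []
    for seq, rows in enumerate(transactions, start=1):
        dep = 0  # 0 = depends on nothing in this window
        for r in rows:
            dep = max(dep, history.get(row_hash(*r), 0))
        for r in rows:
            history[row_hash(*r)] = seq
        result.append((seq, dep))
    return result
```

With COMMIT_ORDER, by contrast, only transactions that happened to commit in the same group are marked independent, which is why a single-threaded source workload produces almost no replay parallelism.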
1. Environment Preparation
1.1 Source MySQL Instance
Configuration: binlog_transaction_dependency_tracking=COMMIT_ORDER (the MySQL default)

shell> dbdeployer deploy single 5.7.31 --remote-access % --bind-address 0.0.0.0 -c skip-name-resolve -c binlog_format=ROW -c binlog_row_image=FULL -c log_slave_updates=ON -c binlog_transaction_dependency_tracking=COMMIT_ORDER --gtid --port 3306

1.2 Target MySQL Instance
shell> dbdeployer deploy single 5.7.31 --remote-access % --bind-address 0.0.0.0 -c skip-name-resolve -c binlog_format=ROW -c binlog_row_image=FULL -c log_slave_updates=ON --gtid --port 3306

1.3 DTLE Cluster (2 nodes, version 3.21.08.0)
Two DTLE instances are deployed to form a cluster.
1.4 DTLE Monitoring System
Refer to the previous article "How to Build a DTLE Monitoring System" for setup details.
2. Test Steps
2.1 Data Preparation
i. Insert 1,000,000 rows of base data into source MySQL
shell> sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=10.186.16.109 --mysql-port=3306 --mysql-user=test --mysql-password=test --mysql-db=test --create_secondary=off --table-size=1000000 --tables=1 prepare

ii. Record the source MySQL Executed_Gtid_Set value
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000002
Position: 189905893
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 00003306-0000-0000-0000-000000003306:1-16,288f4ef1-1b70-11ec-8add-b083fecd2cf3:1-378

iii. Copy the base data to the target MySQL
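As an aside, the Executed_Gtid_Set recorded in step ii can be parsed to count how many transactions it covers; a minimal sketch (the interval arithmetic follows the GTID-set format, `uuid:start-end[:start-end...][,uuid:...]`):

```python
def gtid_count(executed_gtid_set):
    """Count transactions in an Executed_Gtid_Set string such as
    'uuid1:1-16,uuid2:1-378'. Each interval a-b covers b-a+1 GTIDs;
    a bare interval 'a' covers one."""
    total = 0
    for uuid_set in executed_gtid_set.replace("\n", "").split(","):
        _, _, intervals = uuid_set.strip().partition(":")
        for interval in intervals.split(":"):
            lo, _, hi = interval.partition("-")
            total += (int(hi) if hi else int(lo)) - int(lo) + 1
    return total
```

For the value recorded above, this yields 16 + 378 = 394 transactions already executed on the source before the test data is inserted.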
iv. Insert test data (100,000 transactions, 600,000 queries) into source MySQL
shell> sysbench /usr/share/sysbench/oltp_write_only.lua --mysql-host=10.186.16.109 --mysql-port=3306 --mysql-user=test --mysql-password=test --mysql-db=test --report-interval=10 --table-size=1000000 --tables=1 --time=0 --threads=1 --events=100000 run

2.2 Run QPS Calculation Script
# coding=utf-8
import time
from datetime import datetime, timedelta

import pymysql

query_1 = "SHOW MASTER STATUS;"
queries = 600000

run_time = datetime.now()
print(f"RUN TIME: {run_time}")

with pymysql.connect(host='10.186.16.117', port=3306, user='test', passwd='test', autocommit=True) as db:
    with db.cursor(pymysql.cursors.DictCursor) as cursor:
        # Record the target's GTID set, then wait for it to start moving.
        cursor.execute(query_1)
        data = cursor.fetchone()
        init = data['Executed_Gtid_Set']
        while True:
            cursor.execute(query_1)
            data = cursor.fetchone()
            if data['Executed_Gtid_Set'] != init:
                start_time = datetime.now()
                print(f"START TIME: {start_time}")
                break
            time.sleep(1)
        equal_times = 0
        last_gtid = ''
        # 5 seconds without GTID change means the transfer has finished
        while equal_times <= 5:
            cursor.execute(query_1)
            data = cursor.fetchone()
            current_gtid = data['Executed_Gtid_Set']
            print(f"CURRENT GTID: {current_gtid}")
            if current_gtid == last_gtid:
                equal_times += 1
            else:
                equal_times = 0
            last_gtid = current_gtid
            time.sleep(1)
        # Subtract the idle window spent confirming the GTID stopped moving.
        end_time = datetime.now() - timedelta(seconds=5)
        print(f"END TIME: {end_time}")
        diff = (end_time - start_time).seconds
        print(f'Elapsed: {diff}s')
        print(f'QPS: {queries / diff}')

2.3 Create Incremental Task Without MTS
job "no-mts" {
  datacenters = ["dc1"]
  group "Src" {
    affinity {
      attribute = "${node.unique.name}"
      value     = "dtle-src-1"
    }
    task "src" {
      driver = "dtle"
      config {
        Gtid          = "00003306-0000-0000-0000-000000003306:1-16,288f4ef1-1b70-11ec-8add-b083fecd2cf3:1-378"
        ReplicateDoDb = [{ TableSchema = "test" }]
        ConnectionConfig = {
          Host     = "10.186.16.109"
          Port     = 3306
          User     = "test"
          Password = "test"
        }
      }
    }
  }
  group "Dest" {
    affinity {
      attribute = "${node.unique.name}"
      value     = "dtle-dest-1"
    }
    task "dest" {
      driver = "dtle"
      config {
        ParallelWorkers    = 32
        UseMySQLDependency = true
        ConnectionConfig = {
          Host     = "10.186.16.117"
          Port     = 3306
          User     = "test"
          Password = "test"
        }
      }
    }
  }
}

3. Additional Test Scenarios
3.1 MySQL without MTS, DTLE with MTS
Set UseMySQLDependency = false in the DTLE dest task, so that DTLE computes row-level (writeset) dependencies itself instead of relying on the source's COMMIT_ORDER grouping.
3.2 MySQL with MTS, DTLE without MTS
Redeploy source MySQL with:
-c binlog_transaction_dependency_tracking=WRITESET -c transaction_write_set_extraction=XXHASH64

3.3 Both MySQL and DTLE Enable MTS
Use the configurations from 3.2 and set UseMySQLDependency=false in DTLE.
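Across the four scenarios, only two knobs change: binlog_transaction_dependency_tracking on the source MySQL and UseMySQLDependency in the DTLE job. A small helper for deriving a scenario variant from the job text (a sketch only; BASE_JOB below is a stand-in fragment, not a full job file):

```python
import re

# Illustrative fragment of the dest task's config section.
BASE_JOB = "ParallelWorkers    = 32\nUseMySQLDependency = true\n"


def set_dtle_mts(job_hcl, mts_on):
    """Toggle DTLE's writeset-based parallel replay in a job file.

    UseMySQLDependency = true  -> follow the source MySQL's dependency
                                  information (DTLE MTS off)
    UseMySQLDependency = false -> DTLE computes writeset dependencies
                                  itself (DTLE MTS on)
    """
    value = "false" if mts_on else "true"
    return re.sub(r"UseMySQLDependency\s*=\s*\w+",
                  f"UseMySQLDependency = {value}", job_hcl)
```

The source-side knob is flipped by redeploying MySQL with or without the two -c flags shown in 3.2.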
3.4 Add network latency between target DTLE and target MySQL
Introduce artificial latency to compare performance.
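One common way to introduce such latency is Linux tc/netem on the network interface of the target DTLE node. A minimal sketch that only constructs the commands (the interface name eth0 and the 20 ms value are placeholders; actually running them requires root privileges and iproute2):

```python
def netem_delay_cmd(dev, delay_ms, remove=False):
    """Build a tc-netem command that adds (or removes) a fixed
    artificial delay on the given network interface."""
    if remove:
        return ["tc", "qdisc", "del", "dev", dev, "root", "netem"]
    return ["tc", "qdisc", "add", "dev", dev, "root", "netem",
            "delay", f"{delay_ms}ms"]
```

For example, `netem_delay_cmd("eth0", 20)` yields the argument list for adding a 20 ms delay, and `netem_delay_cmd("eth0", 20, remove=True)` the list for clearing it after the test.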
4. Test Results
| MySQL MTS | DTLE MTS | Network Latency | QPS  | Source CPU | Target CPU | Source Memory | Target Memory | Bandwidth  |
|-----------|----------|-----------------|------|------------|------------|---------------|---------------|------------|
| off       | off      | 0 ms            | 2575 | 58%        | 35%        | 138 MiB       | 162 MiB       | 2.21 Mib/s |
| off       | off      | 20 ms           | 41   | 2%         | 2%         | 137 MiB       | 165 MiB       | 36.5 Kib/s |
| off       | on       | 0 ms            | 6666 | 138%       | 70%        | 139 MiB       | 158 MiB       | 5.94 Mib/s |
| off       | on       | 20 ms           | 1140 | 23%        | 14%        | 138 MiB       | 170 MiB       | 1 Mib/s    |
| on        | off      | 0 ms            | 6593 | 135%       | 73%        | 138 MiB       | 167 MiB       | 5.79 Mib/s |
| on        | off      | 20 ms           | 1158 | 23%        | 15%        | 139 MiB       | 169 MiB       | 1 Mib/s    |
| on        | on       | 0 ms            | 6976 | 134%       | 73%        | 139 MiB       | 171 MiB       | 6.26 Mib/s |
| on        | on       | 20 ms           | 1145 | 21%        | 14%        | 139 MiB       | 167 MiB       | 1 Mib/s    |
5. Conclusions
Enabling MTS on either MySQL or DTLE dramatically improves DTLE transfer speed.
When data changes are independent of each other, MTS increases transfer speed by about 2.6× with no network latency and up to about 28× with 20 ms of latency.
MTS does not noticeably affect memory usage, but it roughly doubles CPU usage on the DTLE nodes.
Minimizing network latency between target DTLE and target MySQL further boosts transfer performance.
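The headline ratios follow directly from the QPS column of the results table in section 4:

```python
# QPS values from the results table (DTLE MTS off vs. on, MySQL MTS off).
no_mts_0ms, mts_0ms = 2575, 6666      # no added network latency
no_mts_20ms, mts_20ms = 41, 1140      # 20 ms added latency

print(round(mts_0ms / no_mts_0ms, 1))    # ~2.6x speed-up
print(round(mts_20ms / no_mts_20ms))     # ~28x speed-up
```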
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise-grade open-source tools and services around MySQL, releases a premium open-source component each year on 1024 (October 24), and continuously operates and maintains them.