Performance Evaluation of DTLE under Varying Network Latency and Bandwidth Conditions
This article presents a systematic performance test of the DTLE data replication tool, using sysbench, tc, and iperf3 to simulate different network latency and bandwidth scenarios, and compares DTLE's replication delay and bandwidth usage against native MySQL replication across three test cases.
The test environment uses sysbench to generate insert‑update‑delete load on ten 10,000‑row tables and the Linux tc tool to emulate high‑latency, low‑bandwidth network conditions. Two DTLE servers (10.186.63.20 and 10.186.63.145) and two database servers (10.186.18.123 and 10.186.18.117) are prepared, with network limits applied on both source and target sides.
Tool preparation:
tc – Linux traffic control utility, used here to simulate bandwidth limits and added delay
iperf3 – network bandwidth verification tool
sysbench – database load generator
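The sysbench load described above (ten 10,000-row tables, mixed insert/update/delete traffic) can be sketched as follows. Hostnames, credentials, and the choice of the oltp_write_only script are assumptions, not taken from the original test, so the commands are assembled and printed rather than executed:

```shell
#!/usr/bin/env bash
# Assumed connection details for the source MySQL instance.
HOST=10.186.18.123
PORT=3306
USER=sysbench
PASS=sysbench
DB=sbtest

COMMON="--mysql-host=${HOST} --mysql-port=${PORT} --mysql-user=${USER} --mysql-password=${PASS} --mysql-db=${DB} --tables=10 --table-size=10000"

# Create ten 10,000-row tables, then run a write-only (insert/update/delete)
# load rate-limited to roughly 300 transactions per second.
PREPARE_CMD="sysbench oltp_write_only ${COMMON} prepare"
RUN_CMD="sysbench oltp_write_only ${COMMON} --threads=4 --time=600 --rate=300 run"

echo "${PREPARE_CMD}"
echo "${RUN_CMD}"
```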
Network‑limit script (bash):
#!/usr/bin/env bash
# Path to the tc binary
TC=$(command -v tc)
# Interface to limit
IF=eth0
# Download/Upload limits
DNLD=2mbit
UPLD=2mbit
# Host IP
IP=10.186.63.145
# Network latency
DELAY=125ms
U32="${TC} filter add dev ${IF} protocol ip parent 1:0 prio 1 u32"
${TC} qdisc add dev ${IF} root handle 1: htb default 1
${TC} class add dev ${IF} parent 1: classid 1:10 htb rate ${DNLD}
${TC} class add dev ${IF} parent 1: classid 1:20 htb rate ${UPLD}
${TC} qdisc add dev ${IF} parent 1:10 handle 10: netem delay ${DELAY}
${TC} qdisc add dev ${IF} parent 1:20 handle 20: netem delay ${DELAY}
${U32} match ip dst ${IP}/32 flowid 1:10
${U32} match ip src ${IP}/32 flowid 1:20

Verification: ping between the DTLE source and target servers confirms the added latency, and bandwidth is checked with iperf3 -s on the target and iperf3 -c <target_ip> on the source.
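Because the 125 ms netem delay is applied on the egress of both the source and the target, a round trip crosses two delayed links, so ping should report roughly 250 ms on top of the base latency. A minimal sketch of the check, with the interface and IPs as in the script above:

```shell
#!/usr/bin/env bash
DELAY_MS=125                      # per-host egress delay from the tc script
EXPECTED_RTT=$((2 * DELAY_MS))    # source egress + target egress per round trip

echo "expected ping RTT: ~${EXPECTED_RTT} ms plus base latency"

# Verification commands (run on the appropriate hosts):
#   tc qdisc show dev eth0       # confirm the htb/netem qdiscs are installed
#   ping 10.186.63.145           # RTT should be ~250 ms plus base latency
#   iperf3 -s                    # on the target
#   iperf3 -c 10.186.63.145     # on the source; should report ~2 Mbit/s
```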
Scenario 1 – Replication delay under different network latencies: With 2 Mbit/s bandwidth and a 300 QPS load (≈1.47 Mbit/s of binlog), latency is varied via the tc script. GroupTimeout is set to twice the network delay minus 10 ms, and GroupMaxSize to 512 KB. Replication delay stays within 2 seconds at every tested latency; it only grows, roughly linearly, once the available bandwidth falls below the binlog generation rate.
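The tuning rule above (GroupTimeout = 2 × network delay − 10 ms) can be checked with quick shell arithmetic; for the 250 ms delay used in Scenario 2 it yields exactly the 490 ms reported there:

```shell
#!/usr/bin/env bash
# GroupTimeout = 2 * network delay - 10 ms, per the tuning rule above.
group_timeout() {
  local delay_ms=$1
  echo $((2 * delay_ms - 10))
}

group_timeout 125   # prints 240
group_timeout 250   # prints 490, matching Scenario 2's GroupTimeout
```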
Scenario 2 – Extreme-bandwidth test, comparing MySQL native replication and DTLE: Bandwidth is limited to 2 Mbit/s and latency to 250 ms. Parameters: GroupTimeout = 490 ms, GroupMaxSize = 1 MB, ReplChanBufferSize = 600. DTLE sustains up to 2.7 Mbit/s of data pressure, while MySQL native replication caps out around 1.8 Mbit/s, demonstrating DTLE's advantage in narrow-bandwidth environments.
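In a DTLE job definition these knobs live in the source-side task configuration. The fragment below is illustrative only: the field names match the parameters quoted above, but the surrounding job structure and the assumption that GroupMaxSize is given in bytes (1 MB = 1048576) are mine, not taken from the original test.

```json
{
  "Type": "Src",
  "Config": {
    "GroupTimeout": 490,
    "GroupMaxSize": 1048576,
    "ReplChanBufferSize": 600
  }
}
```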
Scenario 3 – Unlimited bandwidth, latency 250 ms: Both MySQL native replication and DTLE are tested under increasing data pressure. DTLE consistently uses less network bandwidth, with its peak bandwidth consumption roughly one‑third of MySQL’s, confirming DTLE’s efficiency in bandwidth utilization.
Overall conclusions:
DTLE maintains replication delay under 2 seconds across a range of latencies when bandwidth is adequate.
In bandwidth‑constrained settings, DTLE sustains a higher data pressure than MySQL native replication.
DTLE’s grouping and compression mechanisms reduce bandwidth consumption, making it well‑suited for narrow‑band scenarios.
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releasing a premium open‑source component each year on October 24 ("1024" Programmers' Day) and continuously operating and maintaining it.