
Getting Started with dtle: Installation, Configuration, and MySQL Data Migration Guide

This guide introduces dtle, an open‑source MySQL data‑transfer middleware. It covers downloading and installing dtle on three nodes, configuring dtle.conf, starting the service, creating source and target MySQL users, preparing job.json, creating and monitoring a migration job through the HTTP API, and verifying the migrated data.

Aikesheng Open Source Community

dtle is an open‑source MySQL data‑transfer middleware developed by the ActionTech community. This article is a step‑by‑step tutorial for downloading, installing, configuring, and using dtle to migrate MySQL data.

Project address: https://github.com/actiontech/dtle

1. Download and install

Download the latest release RPM package:

wget https://github.com/actiontech/dtle/releases/download/v2.19.11.0/dtle-2.19.11.0.x86_64.rpm

Installation instructions are available at https://actiontech.github.io/dtle-docs-cn/4/4.0_installation.html. Install the package on three hosts named node4, node5, and node6. node4 and node5 act as managers; all three act as agents.

2. Configure dtle.conf

Each node requires a customized dtle.conf. The essential settings for each node are shown below (path: /opt/dtle/etc/dtle/dtle.conf).

node4 configuration:

# Setup data dir
data_dir = "/opt/dtle/data"
log_level = "DEBUG"
log_file = "/opt/dtle.log"
bind_addr = "172.100.9.4"

ports {
    http = 8190
}

manager {
    enabled = true
    join = ["172.100.9.4", "172.100.9.5"]
}

agent {
    enabled = true
    managers = ["172.100.9.4:8191", "172.100.9.5:8191"]
}

metric {
    collection_interval = "15s"
    publish_allocation_metrics = "true"
    publish_node_metrics = "true"
}

addresses {
    http = "172.100.9.4"
    rpc = "172.100.9.4"
    serf = "172.100.9.4"
}

advertise {
    http = "172.100.9.4"
    rpc = "172.100.9.4"
    serf = "172.100.9.4"
}

node5 configuration (identical to node4 except that bind_addr, addresses, and advertise use 172.100.9.5):

# Setup data dir
data_dir = "/opt/dtle/data"
log_level = "DEBUG"
log_file = "/opt/dtle.log"
bind_addr = "172.100.9.5"

ports {
    http = 8190
}

manager {
    enabled = true
    join = ["172.100.9.4", "172.100.9.5"]
}

agent {
    enabled = true
    managers = ["172.100.9.4:8191", "172.100.9.5:8191"]
}

metric {
    collection_interval = "15s"
    publish_allocation_metrics = "true"
    publish_node_metrics = "true"
}

addresses {
    http = "172.100.9.5"
    rpc = "172.100.9.5"
    serf = "172.100.9.5"
}

advertise {
    http = "172.100.9.5"
    rpc = "172.100.9.5"
    serf = "172.100.9.5"
}

node6 configuration (manager disabled; bind_addr, addresses, and advertise use 172.100.9.6):

# Setup data dir
data_dir = "/opt/dtle/data"
log_level = "DEBUG"
log_file = "/opt/dtle.log"
bind_addr = "172.100.9.6"

ports {
    http = 8190
}

manager {
    enabled = false
    join = ["172.100.9.4", "172.100.9.5"]
}

agent {
    enabled = true
    managers = ["172.100.9.4:8191", "172.100.9.5:8191"]
}

metric {
    collection_interval = "15s"
    publish_allocation_metrics = "true"
    publish_node_metrics = "true"
}

addresses {
    http = "172.100.9.6"
    rpc = "172.100.9.6"
    serf = "172.100.9.6"
}

advertise {
    http = "172.100.9.6"
    rpc = "172.100.9.6"
    serf = "172.100.9.6"
}

3. Start dtle

After configuring each node, start the dtle service on all three. The start commands are documented at https://actiontech.github.io/dtle-docs-cn/4/4.2_command.html. Verify that all three dtle processes are running before continuing.

4. Prepare source and target MySQL instances

Create migration users on both the source and target MySQL servers and grant them the minimal privileges required (see the user privileges guide in the dtle documentation).
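The snippet below renders example CREATE USER and GRANT statements for the two users. The privilege lists are assumptions typical of binlog‑based replication tools, not dtle's authoritative requirements; the user names and password match the job.json example later in this guide.

```python
# Render CREATE USER / GRANT statements for the two migration users.
# The privilege lists here are assumptions -- verify them against the
# dtle user-privileges documentation before running them.

def source_grants(user: str, host: str, password: str) -> list:
    """Source side: read table data plus the replication stream."""
    return [
        f"CREATE USER '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT "
        f"ON *.* TO '{user}'@'{host}';",
    ]

def dest_grants(user: str, host: str, password: str) -> list:
    """Target side: create schema objects and write rows."""
    return [
        f"CREATE USER '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT ALTER, CREATE, DROP, INDEX, DELETE, INSERT, SELECT, UPDATE "
        f"ON *.* TO '{user}'@'{host}';",
    ]

# Print the statements to paste into the mysql client on each side.
for stmt in source_grants("src_test", "%", "test") + dest_grants("dest_test", "%", "test"):
    print(stmt)
```

Run the printed statements in a mysql client session on the corresponding server.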

Load test data into the source database and note the target database state before migration.

5. Create a migration job

The job definition is a JSON file (job.json) that describes a synchronous full‑plus‑incremental migration. A minimal example:

{
    "Name":"have_a_try",
    "Failover":false,
    "Orders":[],
    "Type":"synchronous",
    "Tasks":[
        {
            "Type":"Src",
            "NodeId":"ee97dc49-85ed-febc-4d3c-cfbfa87f46bd",
            "Config":{
                "Gtid":"",
                "DropTableIfExists":false,
                "SkipCreateDbTable":false,
                "ApproveHeterogeneous":true,
                "ReplChanBufferSize":"600",
                "ChunkSize":"2000",
                "MsgBytesLimit":"20480",
                "MsgsLimit":"65536",
                "BytesLimit":"67108864",
                "GroupMaxSize":"1",
                "GroupTimeout":"100",
                "ReplicateDoDb":[{"TableSchema":"test","Tables":[{"TableName":"test1"}]}],
                "ConnectionConfig":{
                    "Host":"172.100.9.1",
                    "Port":"3306",
                    "User":"src_test",
                    "Password":"test"
                }
            }
        },
        {
            "Type":"Dest",
            "NodeId":"e623aedd-5c37-da67-4ddf-1a82ce1ac298",
            "Config":{
                "ParallelWorkers":"1",
                "ConnectionConfig":{
                    "Host":"172.100.9.2",
                    "Port":"3306",
                    "User":"dest_test",
                    "Password":"test"
                }
            }
        }
    ],
    "ModifyIndex":2372,
    "Status":"running"
}
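Before submitting, it is worth sanity‑checking the file. The helper below is not part of dtle; the required fields it checks are simply the ones used in the example above.

```python
# Sanity-check a job definition before submitting it to the HTTP API.
# Not part of dtle -- the fields checked are taken from the example
# job.json above, nothing more.

REQUIRED_CONN_KEYS = {"Host", "Port", "User", "Password"}

def check_job(job: dict) -> list:
    """Return a list of problems found; an empty list means the job
    has the basic shape used in the example."""
    problems = []
    if not job.get("Name"):
        problems.append("job has no Name")
    tasks = job.get("Tasks", [])
    if sorted(t.get("Type", "") for t in tasks) != ["Dest", "Src"]:
        problems.append("expected exactly one Src and one Dest task")
    for task in tasks:
        conn = task.get("Config", {}).get("ConnectionConfig", {})
        missing = REQUIRED_CONN_KEYS - conn.keys()
        if missing:
            problems.append(f"{task.get('Type')} task missing {sorted(missing)}")
    return problems

# A trimmed version of the example job passes the check:
example = {
    "Name": "have_a_try",
    "Tasks": [
        {"Type": "Src",
         "Config": {"ConnectionConfig": {
             "Host": "172.100.9.1", "Port": "3306",
             "User": "src_test", "Password": "test"}}},
        {"Type": "Dest",
         "Config": {"ConnectionConfig": {
             "Host": "172.100.9.2", "Port": "3306",
             "User": "dest_test", "Password": "test"}}},
    ],
}
print(check_job(example))  # -> []
```

For the real file, load it with `json.load(open("job.json"))` and pass the result to `check_job`.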

Submit the job via the dtle HTTP API (POST to /v1/job) as described at https://actiontech.github.io/dtle-docs-cn/4/4.4_http_api.html. The API returns a job ID; query /v1/job/{id} to monitor status, which should become running.
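The submission and polling steps can be scripted with the standard library. This is a sketch: the endpoint paths follow the description above, and the shape of the API response is not assumed here, so parsing is left to you.

```python
# Build the HTTP requests used to create and monitor a dtle job.
# Endpoint paths follow the description above; confirm them against
# the dtle HTTP API documentation for your version.
import json
import urllib.request

def create_job_request(manager: str, job: dict) -> urllib.request.Request:
    """POST the job definition to a manager's HTTP port (8190 here)."""
    return urllib.request.Request(
        url=f"http://{manager}/v1/job",
        data=json.dumps(job).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def status_url(manager: str, job_id: str) -> str:
    """Endpoint to poll until the job's Status becomes 'running'."""
    return f"http://{manager}/v1/job/{job_id}"

# Against a live cluster:
#   job = json.load(open("job.json"))
#   with urllib.request.urlopen(create_job_request("172.100.9.4:8190", job)) as r:
#       print(r.read())  # inspect the response for the job ID
```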

6. Verify migration

After the full copy completes, compare the target database schema and data with the source. In the original article, screenshots of the new test database and tables created by the job on the target confirm a successful migration.
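One lightweight way to spot‑check the result is to compare per‑table row counts from both sides. The sketch below only does the comparison; gathering the counts (mysql client, a driver) is up to you, and row counts catch gross mismatches only.

```python
# Compare per-table row counts collected from source and target.
# This reports every table whose counts differ, including tables
# present on only one side (shown as None).

def diff_counts(source: dict, target: dict) -> dict:
    """table -> (source_count, target_count) for every mismatch."""
    mismatches = {}
    for table in source.keys() | target.keys():
        s, t = source.get(table), target.get(table)
        if s != t:
            mismatches[table] = (s, t)
    return mismatches

print(diff_counts({"test.test1": 1000}, {"test.test1": 1000}))  # -> {}
```

For a proper consistency check (not just counts), tools such as pt-table-checksum compare actual row contents.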

Conclusion

For more usage details, refer to the full dtle documentation at https://actiontech.github.io/dtle-docs-cn/ . The community encourages readers to try the tool and join the official QQ technical group (852990221) for further assistance.

Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year on 10/24, and continuously operates and maintains them.
