
Full Data Import and Export Between MySQL and TiDB Using mydumper, dumpling, and tidb-lightning

This article demonstrates how to move full data between TiDB and MySQL in both directions, using TiDB's mydumper and dumpling export tools and the tidb-lightning importer. It covers both SQL and CSV formats, a shell script for automated loading, and row-count verification for large tables.

Aikesheng Open Source Community

MySQL and TiDB share about 80% syntax compatibility, allowing them to be used interchangeably in many scenarios; TiDB can act as an upstream for MySQL and MySQL can act as an upstream for TiDB. This guide shows how to move full data between the two databases.

Part 1: TiDB as upstream, MySQL as downstream. TiDB cannot dump tables directly to CSV the way MySQL's SELECT ... INTO OUTFILE can, but it provides two export tools: mydumper (SQL only) and dumpling (SQL or CSV). Example commands:

[root@ytt-pc data_sql]# mydumper -u root -h 127.0.0.1 -P 4001 -B ytt -T t1 -F 256 -o /tmp/data_sql/

The -F 256 option above splits the output into SQL files of roughly 256 MB each. To export CSV instead, use dumpling:

[root@ytt-pc data_csv]# dumpling -B ytt -T ytt.t1_csv -u root -P4001 -h 127.0.0.1 --filetype csv --filesize 256M -o /tmp/data_csv/
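Before loading the export into MySQL, it is worth sanity-checking the row count on disk. A minimal sketch: each dumpling CSV file carries a one-line column header by default, so the data-row total is the line count minus one header per file. The helper below is illustrative (the `csv_data_rows` name and the file pattern are this article's conventions, not part of dumpling):

```shell
#!/bin/sh
# Count data rows across dumpling CSV exports, subtracting the
# one-line header each file carries (dumpling's default).
csv_data_rows() {
    files=$#                      # one header line per file
    lines=$(cat "$@" | wc -l)     # total lines across all files
    echo $((lines - files))
}

# Usage against the export above (hypothetical file names):
# csv_data_rows /tmp/data_csv/ytt.t1_csv.*.csv
```

The result should match the source table's COUNT(*) before any import is attempted.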

A simple shell script imports the generated files into MySQL:

#!/bin/sh
usage(){
    echo ""
    echo "Usage: ./source_tidb_to_mysql {csv|sql}"
    echo ""
}
file_format=$1
file_path_csv=/tmp/data_csv/
file_path_sql=/tmp/data_sql/
if [ "$file_format" = "csv" ]; then
    for i in "$file_path_csv"ytt*.csv
    do
        # The double quote must be escaped (\") to survive shell quoting;
        # "ignore 1 lines" skips the header row dumpling writes.
        load_options="load data infile '$i' into table t1_csv fields terminated by ',' enclosed by '\"' ignore 1 lines"
        mysql -udumper -S /tmp/mysql_sandbox5731.sock -D ytt -e "$load_options"
    done
elif [ "$file_format" = "sql" ]; then
    for i in "$file_path_sql"ytt.t1.*.sql
    do
        mysql -udumper -S /tmp/mysql_sandbox5731.sock -D ytt < "$i"
    done
else
    usage
fi

Running the script with ./source_tidb_to_mysql sql and ./source_tidb_to_mysql csv loads the tables, and a quick count confirms both tables contain 2,000,000 rows.
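The final count check can itself be scripted. A minimal sketch, assuming the TiDB and MySQL endpoints used throughout this article; the `compare_counts` helper is hypothetical, and in practice its two arguments would come from `SELECT COUNT(*)` queries against each side:

```shell
#!/bin/sh
# Compare upstream and downstream row counts after a migration.
compare_counts() {
    # $1 = upstream row count, $2 = downstream row count
    if [ "$1" = "$2" ]; then
        echo "match: $1 rows"
    else
        echo "MISMATCH: upstream=$1 downstream=$2"
        return 1
    fi
}

# In practice the counts would come from the servers, e.g.:
# tidb_count=$(mysql -u root -h 127.0.0.1 -P 4001 -N -e 'select count(*) from ytt.t1')
# mysql_count=$(mysql -udumper -S /tmp/mysql_sandbox5731.sock -N -e 'select count(*) from ytt.t1')
# compare_counts "$tidb_count" "$mysql_count"
```

The non-zero exit status on mismatch makes the check easy to wire into a larger migration script.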

Part 2: MySQL as upstream, TiDB as downstream. To avoid post‑processing of MySQL dumps, TiDB's dumpling tool can directly export MySQL tables. Example commands:

[root@ytt-pc data_csv]# dumpling -B ytt -T ytt.t1_csv -udumper -P5731 -h 127.0.0.1 --filetype csv --filesize 256M -o /tmp/data_csv/
[root@ytt-pc data_sql]# dumpling -B ytt -T ytt.t1 -udumper -P5731 -h 127.0.0.1 --filetype sql --filesize 256M -o /tmp/data_sql/

The exported data is then imported into TiDB using tidb-lightning, which requires the tikv-importer service to be running (default port 8287) and the tidb-lightning client (default status port 8289).

# Start tikv-importer
[root@ytt-pc data_sql]# tikv-importer -A 127.0.0.1:8287
# Import t1 (SQL files)
[root@ytt-pc data_sql]# tidb-lightning --importer 127.0.0.1:8287 --status-addr 127.0.0.1:8289 --tidb-host 127.0.0.1 --tidb-port 4001 --tidb-status 10081 --tidb-user root -d /tmp/data_sql/
# Import t1_csv (CSV files)
[root@ytt-pc data_sql]# tidb-lightning --importer 127.0.0.1:8287 --status-addr 127.0.0.1:8289 --tidb-host 127.0.0.1 --tidb-port 4001 --tidb-status 10081 --tidb-user root -d /tmp/data_csv/
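The command-line flags above can also be collected into a tidb-lightning.toml passed via -config, which is easier to version and reuse. A minimal sketch mirroring the flags used here; the section and key names follow tidb-lightning's documented configuration, but verify them against the docs for your version:

```shell
#!/bin/sh
# Write a minimal tidb-lightning config equivalent to the CLI flags above.
# Paths and addresses match this article's setup; adjust to your environment.
cat > /tmp/tidb-lightning.toml <<'EOF'
[lightning]
status-addr = "127.0.0.1:8289"

[tikv-importer]
addr = "127.0.0.1:8287"

[mydumper]
data-source-dir = "/tmp/data_sql"

[tidb]
host = "127.0.0.1"
port = 4001
status-port = 10081
user = "root"
EOF

# Then run the import with:
# tidb-lightning -config /tmp/tidb-lightning.toml
```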

After import, a count query again shows 2,000,000 rows in each table, confirming successful migration.

For small tables (e.g., 10,000 rows), a simpler approach is to use MySQL's SELECT ... INTO OUTFILE to generate a CSV and then load it directly into TiDB with LOAD DATA LOCAL INFILE:

mysql> SELECT * FROM t1_small INTO OUTFILE '/tmp/data_csv/t1_small.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"';
mysql> LOAD DATA LOCAL INFILE '/tmp/data_csv/t1_small.csv' INTO TABLE t1_small FIELDS TERMINATED BY ',' ENCLOSED BY '"';

The article concludes with references to related MySQL 8.0 feature deep‑dives.

Tags: data migration, MySQL, TiDB, CSV export, mydumper, dumpling, tidb-lightning
Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
