
How to Import Large Volumes of Data into MySQL: Tools, Commands, and Performance Tips

This article compares various MySQL data import/export methods—including mysqldump, mydumper, SELECT OUTFILE, and MySQL Shell utilities—explaining their commands, configuration options, performance results, and best‑practice recommendations based on testing with a 10‑million‑row table.

Aikesheng Open Source Community

1. Requirement Background

Application developers need to export and import large volumes of data and asked the DBA: how can massive amounts of data be imported into MySQL efficiently? Their current approaches are:

mysqldump

SELECT OUTFILE statement

Graphical tools (MySQL Workbench, Navicat, DBeaver)

The DBA initially allowed them to use the methods they were familiar with and promised to follow up with recommendations.

2. Solution Preparation

Tested tools: mysqldump, mydumper, SELECT OUTFILE, Util.dumpTables, and Util.exportTable.

Environment Configuration

Configuration Item   Description
MySQL version        5.7.39
Disk random R/W      100 MiB/sec
Test table           test.t_order_info
Row count            10,000,000
Column count         6

Table Definition

CREATE TABLE `t_order_info` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Auto-increment primary key ID',
  `order_no` varchar(64) NOT NULL DEFAULT '0000' COMMENT 'Order number',
  `order_status` varchar(2) NOT NULL DEFAULT '01' COMMENT 'Order status: 00-abnormal, 01-pending, 02-in progress, 03-completed',
  `flag` tinyint(4) NOT NULL DEFAULT '1' COMMENT 'Deletion flag: 1-normal, 0-logically deleted',
  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Creation time',
  `modify_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Update time',
  PRIMARY KEY (`ID`),
  UNIQUE KEY `IDX_ORDER_NO` (`order_no`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='Order table';

Export Files

Backup files containing structure and data (mysqldump, mydumper, Util.dumpTables)

Data‑only files (SELECT OUTFILE, Util.exportTable)

Export / Import Commands

Export             Import
mysqldump          source or mysql < xxx.sql
mydumper           myloader
SELECT OUTFILE     LOAD DATA
Util.dumpTables    Util.loadDump
Util.exportTable   Util.importTable

3. Solution Testing

The main goal is to improve import efficiency; MySQL Shell utilities are included in the comparison as well.

mysqldump

Single‑table export (backup file)

mysqldump --default-character-set=utf8mb4 --master-data=2 --single-transaction --set-gtid-purged=off --hex-blob --tables test t_order_info

--master-data=2 locks tables and records binlog position; omit if replication is not needed.

--single-transaction ensures a consistent snapshot without locking tables.

Backup file notes

No CREATE DATABASE because it is a single‑table dump.

Be careful with DROP TABLE statements; they may delete data unintentionally.

INSERT statements lack column names; target table must have identical structure.

During import a LOCK TABLES … WRITE is issued, preventing writes.

ALTER TABLE … DISABLE KEYS speeds up bulk inserts; re‑enable after import.

Useful parameters to avoid the above issues:

--no-create-info – skip table‑creation statements.

--skip-add-drop-database – skip DROP DATABASE.

--skip-add-drop-table – skip DROP TABLE.

--skip-add-locks – omit LOCK TABLES.

--complete-insert – include column names in INSERT.
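Combining these options, a data-only, non-destructive dump might look like the following sketch (connection details and the output path are placeholders, not from the original test):

```shell
# Data-only dump: no CREATE/DROP TABLE, no LOCK TABLES, and column names
# included in every INSERT so the target table layout need not match exactly.
mysqldump --default-character-set=utf8mb4 \
  --single-transaction --set-gtid-purged=off --hex-blob \
  --no-create-info --skip-add-drop-table --skip-add-locks \
  --complete-insert \
  test t_order_info > /backup/t_order_info_data.sql
```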

Performance

Import with mysql < xxx.sql: ~5 minutes.

Import with source xxx.sql: ~10 minutes.

Recommendation: both methods are single-threaded; prefer the first (mysql < xxx.sql), and use mysqldump only for small datasets.

mydumper

Version 0.14.4, multi‑threaded export.

Multi‑threaded export

mydumper -u admin -p 123456 -P 3306 -h 127.0.0.1 -t 8 --trx-consistency-only -G -E -R --skip-tz-utc --verbose=3 --compress --no-schemas --rows=1000000 -T test.t_order_info -o /backup

Generates *.dat (CSV data) and *.sql (LOAD DATA statements).

Multi‑threaded import

myloader -u admin -p 123456 -P 3306 -h 127.0.0.1 --enable-binlog -t 8 --verbose=3 -B test -d /backup

Performance: ~2 minutes for >50 GB data; recommended for very large datasets.

SELECT OUTFILE

Tip: suitable for single-table export; does not support multi-table export.

SELECT * FROM test.t_order_info INTO OUTFILE "/data/mysql/3306/tmp/t_order_info0630_full.csv" CHARACTER SET utf8mb4 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\'' LINES TERMINATED BY '\n';

Import command (≈3 minutes):

mysql -uadmin -P3306 -h127.0.0.1 -p123456 --local-infile
LOAD DATA LOCAL INFILE '/data/mysql/3306/tmp/t_order_info0630_full.csv' INTO TABLE test.t_order_info CHARACTER SET utf8mb4 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\'' LINES TERMINATED BY '\n';
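Note that LOAD DATA LOCAL INFILE requires local_infile to be enabled on the server as well as the client, and SELECT ... INTO OUTFILE can only write under the directory set by secure_file_priv. A quick pre-flight check might look like this (connection parameters are illustrative placeholders):

```shell
# Verify the server allows LOCAL infile loads and check where OUTFILE may write.
# secure_file_priv is read-only at runtime; local_infile can be toggled.
mysql -uadmin -p123456 -h127.0.0.1 -P3306 -e "
  SHOW GLOBAL VARIABLES LIKE 'local_infile';
  SHOW GLOBAL VARIABLES LIKE 'secure_file_priv';
  SET GLOBAL local_infile = 1;"
```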

Advantages: flexible column selection, fast for <10 GB.

MySQL Shell – dumpTables / loadDump

util.dumpTables("test", ["t_order_info"], "/backup")

Supports partial export with a where clause.

util.dumpTables("test", ["t_order_info"], "/backup", {"where": {"test.t_order_info": "create_time>'2023-01-02'"}})

Import (≈3 minutes):

util.loadDump("/backup")

Note: does not support partial import or cross-version imports.
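loadDump runs multi-threaded, and the degree of parallelism can be tuned. A minimal sketch, assuming MySQL Shell 8.x and placeholder connection details:

```shell
# Run util.loadDump from the command line with 8 parallel load threads.
# User, host, and dump directory are illustrative placeholders.
mysqlsh admin@127.0.0.1:3306 --js -e 'util.loadDump("/backup", {threads: 8})'
```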

MySQL Shell – exportTable / importTable

util.exportTable("test.t_order_info", "/backup/t_order_info.csv", {defaultCharacterSet:"utf8mb4", fieldsOptionallyEnclosed:true, fieldsTerminatedBy:",", linesTerminatedBy:"\n", fieldsEnclosedBy:'"', showProgress:true, dialect:"csv"})

Import (≈10 minutes):

util.importTable("/backup/t_order_info.csv", {characterSet:"utf8mb4", dialect:"csv", fieldsEnclosedBy:'"', fieldsOptionallyEnclosed:true, fieldsTerminatedBy:",", linesTerminatedBy:"\n", schema:"test", table:"t_order_info"})

Single‑threaded export, multi‑threaded import; may require retry on lock‑wait timeouts.
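Since the import side is multi-threaded, its parallelism and chunking can be tuned. A sketch using the threads and bytesPerChunk options documented for util.importTable (connection details are placeholders):

```shell
# Import the CSV with 8 threads, splitting the file into ~100 MB chunks
# so threads can load it in parallel. Paths/credentials are illustrative.
mysqlsh admin@127.0.0.1:3306 --js -e 'util.importTable(
  "/backup/t_order_info.csv",
  {schema: "test", table: "t_order_info", dialect: "csv",
   characterSet: "utf8mb4", fieldsEnclosedBy: "\"",
   fieldsOptionallyEnclosed: true, fieldsTerminatedBy: ",",
   linesTerminatedBy: "\n", threads: 8, bytesPerChunk: "100M"})'
```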

4. Summary

Choose the tool based on data size:

Export             Import              Pros                             Recommendation (Efficiency)
mysqldump          source / mysql <    Native, remote-friendly          ⭐⭐⭐ – data < 10 GB
mydumper           myloader            Multi-threaded                   ⭐⭐⭐ – data > 50 GB
SELECT OUTFILE     LOAD DATA           Most flexible                    ⭐⭐ – data < 20 GB
Util.dumpTables    Util.loadDump       Native, multi-threaded           ⭐⭐⭐ – data < 50 GB
Util.exportTable   Util.importTable    Native, single-threaded export   ⭐ – data < 20 GB

When importing with mysql <, watch for DROP TABLE statements in the dump file to avoid accidental data loss.

The first three methods all support WHERE filtering; mydumper is the fastest of them.

mysqldump and mydumper acquire FTWRL (FLUSH TABLES WITH READ LOCK) locks during the dump.

Util.dumpTables cannot perform incremental loads into existing tables.

Util.exportTable export is single‑threaded; import may need retries.

Test before production and verify data after import.
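Verification after import can be as simple as comparing row counts and table checksums between source and target (a sketch; connection details are placeholders, the table name is from this article):

```shell
# Compare these numbers against the same queries run on the source instance.
mysql -uadmin -p123456 -h127.0.0.1 -P3306 -e "
  SELECT COUNT(*) FROM test.t_order_info;
  CHECKSUM TABLE test.t_order_info;"
```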

5. Giveaway Question

Beyond the methods described, what other effective data import solutions exist? Leave your answers in the comments; a selected responder will receive a community prize.

Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
