
Controlling mysqldump Insert Batch Size with --net-buffer-length to Avoid Large Transactions

This article explains how mysqldump’s --net-buffer-length option can be used to split INSERT statements into manageable batches, defines large transactions, shows performance testing with different buffer sizes, and concludes that the default settings already avoid large transactions and provide acceptable import performance.


Background: A user asked whether the INSERT statements generated by mysqldump could be organized so that each statement contains exactly 10 rows.

Parameter analysis: The --extended-insert option (enabled by default) packs many rows into each INSERT, while --skip-extended-insert emits one row per INSERT; neither produces fixed 10-row batches.
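Since neither flag yields fixed-size batches, one workaround (not part of mysqldump itself) is to post-process a --skip-extended-insert dump and regroup the single-row statements. A minimal Python sketch, assuming each line of the dump holds one complete INSERT statement:

```python
import re

def batch_inserts(lines, batch_size=10):
    """Regroup single-row INSERT statements (as produced by
    --skip-extended-insert) into multi-row INSERTs of batch_size rows.
    Assumes each input line is a complete statement of the form
    INSERT INTO `t` VALUES (...);"""
    pattern = re.compile(r"^(INSERT INTO\s+\S+\s+VALUES\s+)(\(.*\));$")
    prefix, rows, out = None, [], []

    def flush():
        if rows:
            out.append(prefix + ",".join(rows) + ";")
            rows.clear()

    for line in lines:
        m = pattern.match(line.strip())
        if not m:
            flush()
            out.append(line)      # pass non-INSERT lines through unchanged
            continue
        if prefix != m.group(1):  # a different table starts a fresh batch
            flush()
            prefix = m.group(1)
        rows.append(m.group(2))
        if len(rows) == batch_size:
            flush()
    flush()
    return out
```

For example, 25 single-row INSERTs into table `t` become three statements of 10, 10, and 5 rows.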

Consideration of "avoiding large transactions": A large transaction is defined as one that runs for a long time and touches a lot of data, causing lock contention, long rollback, master‑slave delay, and undo‑log growth. The author’s company limits each operation to fewer than 5,000 rows and a result set smaller than 2 MiB.

Are mysqldump‑generated SQL files large transactions? MySQL runs in autocommit mode by default, so without an explicit transaction each SQL statement commits as its own transaction; therefore the dump does not create large transactions under the author’s definition.

mysqldump automatically splits SQL statements according to the --net-buffer-length parameter, which defaults to 1 MiB. The maximum value is 16 MiB; increasing it can improve dump/import performance, but if the goal is to avoid large transactions the default should be kept.
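The splitting behavior described above can be sketched as a greedy packing loop: keep appending row tuples to the current INSERT until the next row would push the statement past the buffer limit, then start a new statement. This is an illustration of the threshold logic only, not mysqldump's actual implementation:

```python
def split_by_buffer(rows, net_buffer_length):
    """Greedily pack serialized row tuples into extended INSERTs,
    starting a new statement once the current one would exceed
    net_buffer_length bytes (a sketch of the threshold behavior;
    the table name `t` mirrors the article's test table)."""
    prefix = "INSERT INTO `t` VALUES "
    statements, current = [], None
    for row in rows:
        if current is None:
            current = prefix + row
        elif len(current) + 1 + len(row) + 1 > net_buffer_length:
            # +1 for the joining comma, +1 for the closing semicolon
            statements.append(current + ";")
            current = prefix + row
        else:
            current += "," + row
    if current is not None:
        statements.append(current + ";")
    return statements
```

With a small buffer the same rows are spread over many short statements; with a large buffer they collapse into a few long ones, which is exactly the 1 MiB vs 16 MiB difference measured below.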

Example command (value deliberately set above the maximum to show the clamping behavior):

mysqldump --net-buffer-length=104652800 -uroot -proot -P3306 -h192.168.199.198 test t > 16M.sql

mysqldump warns that values above 16 MiB are automatically reduced to 16 MiB.

Import considerations: The server variable max_allowed_packet (default 4 MiB) must be at least as large as the longest statement in the dump file. It can be increased, e.g.: set global max_allowed_packet=16*1024*1024;
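The relationship is a simple inequality: the import only succeeds if max_allowed_packet on the receiving server is at least as large as the longest statement, which is roughly --net-buffer-length. A trivial sanity-check helper (hypothetical, not part of any MySQL tooling):

```python
def import_will_fit(net_buffer_length, max_allowed_packet):
    """Each extended INSERT in the dump is at most roughly
    net_buffer_length bytes, so the receiving server's
    max_allowed_packet must be at least that large."""
    return max_allowed_packet >= net_buffer_length

MIB = 1024 * 1024
# a 16 MiB-per-statement dump needs max_allowed_packet >= 16 MiB;
# the 4 MiB default is too small for it
print(import_will_fit(16 * MIB, 16 * MIB))  # True
print(import_will_fit(16 * MIB, 4 * MIB))   # False
```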

Test data generation: A table t was created and populated until it contained 1,048,576 rows (~284 MiB).

Test with --net-buffer-length=1M : The resulting 225 MiB dump contained 226 INSERT statements, roughly 1 MiB each.

Test with --net-buffer-length=16M : The same data produced a 225 MiB dump with only 15 INSERT statements, each about 16 MiB.
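The two statement counts are consistent with a back-of-the-envelope estimate: total dump size divided by the per-statement byte budget, rounded up. A quick check of that arithmetic:

```python
import math

def expected_statement_count(dump_bytes, net_buffer_length):
    """Rough estimate of the number of extended INSERTs in a dump:
    total INSERT data divided by the per-statement budget, rounded up."""
    return math.ceil(dump_bytes / net_buffer_length)

MIB = 1024 * 1024
# ~225 MiB of data at ~1 MiB per statement -> about 225 statements
# (the article observed 226); at ~16 MiB per statement -> 15 statements
print(expected_statement_count(225 * MIB, 1 * MIB))   # 225
print(expected_statement_count(225 * MIB, 16 * MIB))  # 15
```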

Performance test: Importing the 1 MiB‑buffer dump took about 11 seconds, while the 16 MiB‑buffer dump took about 8 seconds, showing that larger batches reduce client‑server round trips and speed up the import.

Conclusion: Under default settings mysqldump produces backup files that meet import requirements without creating large transactions, and the performance is satisfactory; therefore adjusting --net-buffer-length is not necessary for most scenarios.

Tags: Performance Testing, MySQL, database backup, mysqldump, large transaction, net-buffer-length
Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
