Impact of Index Count on MySQL Write Performance: Insert, Update, Delete, and Load Data Experiments
This article experimentally demonstrates how increasing the number of secondary indexes on MySQL tables progressively degrades write operations—including INSERT, UPDATE, DELETE, and LOAD DATA—by measuring execution times across ten tables with one to ten indexes each.
When discussing indexes, most people focus on their query-speed benefits, but indexes also impose overhead on writes: every INSERT, UPDATE, and DELETE must maintain each secondary index's B+-tree, which can trigger page splits and merges.
The article investigates how the number of indexes affects write efficiency using simple experiments on a MySQL single‑instance sandbox running on a virtual machine.
Ten tables (t1‑t10) are created from a base table t0; each table receives a different number of secondary indexes (t1 has 1, t2 has 2, …, t10 has 10). The base table schema is:
(debian-ytt1:3500)|(ytt)>show create table t0\G
*************************** 1. row ***************************
Table: t0
Create Table: CREATE TABLE `t0` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`r0` int DEFAULT NULL,
`r1` int DEFAULT NULL,
`r2` int DEFAULT NULL,
`r3` int DEFAULT NULL,
`r4` int DEFAULT NULL,
`r5` int DEFAULT NULL,
`r6` int DEFAULT NULL,
`r7` int DEFAULT NULL,
`r8` int DEFAULT NULL,
`r9` int DEFAULT NULL,
`r10` int DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.00 sec)

The tables and their indexes are created with the following shell loops:
root@debian-ytt1:~# for i in `seq 1 10`;do mysql --login-path=root_ytt -e "use ytt;create table t$i like t0";done;
root@debian-ytt1:~# for i in `seq 1 10`; do for j in `seq 1 "$i"`; do mysql --login-path=root_ytt -e "use ytt;alter table t$i add key idx_r$j (r$j)"; done; done;

A stored procedure sp_batch_write is defined to perform bulk INSERT, UPDATE, or DELETE operations on a given table:
DELIMITER $$
USE `ytt`$$
DROP PROCEDURE IF EXISTS `sp_batch_write`$$
CREATE DEFINER=`root`@`%` PROCEDURE `sp_batch_write`(
    IN f_write ENUM('insert','update','delete'),
    IN f_table_name VARCHAR(64),
    IN f_num INT UNSIGNED
)
BEGIN
    DECLARE i INT UNSIGNED DEFAULT 0;
    IF f_write = 'insert' THEN
        -- Insert f_num random rows, committing every 50 rows to batch the work.
        SET @stmt = CONCAT('insert into ',f_table_name,'(r0,r1,r2,r3,r4,r5,r6,r7,r8,r9,r10) values (ceil(rand()*10000),... )');
        SET @@autocommit = 0;
        WHILE i < f_num DO
            PREPARE s1 FROM @stmt;
            EXECUTE s1;
            IF MOD(i,50) = 0 THEN COMMIT; END IF;
            SET i = i + 1;
        END WHILE;
        DROP PREPARE s1;
        COMMIT;
        SET @@autocommit = 1;
    ELSEIF f_write = 'update' THEN
        -- Update every row in a single statement (f_num is not used here).
        SET @stmt = CONCAT('update ',f_table_name,' set r0=ceil(rand()*10000),...');
        PREPARE s1 FROM @stmt;
        EXECUTE s1;
        DROP PREPARE s1;
    ELSEIF f_write = 'delete' THEN
        -- Delete every row in a single statement.
        SET @stmt = CONCAT('delete from ',f_table_name);
        PREPARE s1 FROM @stmt;
        EXECUTE s1;
        DROP PREPARE s1;
    END IF;
END$$
DELIMITER ;

INSERT test: Calling the procedure with 100,000 rows shows execution time increasing with index count, from 1 min 24 sec for t1 up to 2 min 32 sec for t10. The trend is visualized in a chart.
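The exact invocations are not shown in the excerpt, but based on the procedure's signature the timing run presumably looks like this (table name and row count assumed for illustration):

```sql
-- Time a bulk insert of 100,000 rows into t1; repeat for t2..t10.
-- The mysql client prints the elapsed time after each call.
USE ytt;
CALL sp_batch_write('insert', 't1', 100000);
```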
UPDATE test: Update operations are more sensitive to index count; t1 completes in 22.8 sec, while t10 takes 4 min 43 sec, again showing a clear upward trend.
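Note that the procedure's update branch issues one full-table UPDATE and ignores f_num, so the test presumably times a single call per table (argument values assumed):

```sql
-- Updates every row of t1 in one statement; the third argument is unused
-- in the 'update' branch of sp_batch_write.
CALL sp_batch_write('update', 't1', 0);
```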
DELETE test: On-disk file sizes grow with the number of indexes (t1 ≈ 28 MB, t10 ≈ 72 MB), and delete times rise accordingly, from 31.8 sec (t1) to 4 min 22 sec (t10). The results match expectations, as illustrated by another chart.
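The on-disk growth can also be checked from inside MySQL rather than the filesystem; a sketch against information_schema (schema name from the article, query wording assumed):

```sql
-- Per-table data and index sizes in MB for t1..t10.
SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 1) AS data_mb,
       ROUND(index_length / 1024 / 1024, 1) AS index_mb,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS total_mb
FROM information_schema.tables
WHERE table_schema = 'ytt'
  AND table_name REGEXP '^t[0-9]+$'
ORDER BY table_name;
```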
The article notes that deleting rows is slower than truncating the table: DELETE marks each row for removal and relies on the purge thread to reclaim the space later, while also maintaining every secondary index, whereas TRUNCATE simply drops and recreates the table. TRUNCATE is therefore preferred when clearing an entire table.
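The difference in a minimal sketch (table name assumed):

```sql
-- Row-by-row removal: maintains all secondary indexes and leaves
-- dead rows for the purge thread; can take minutes on a large table.
DELETE FROM t10;

-- DDL: drops and recreates the table, near-instant regardless of size;
-- also resets AUTO_INCREMENT and cannot be rolled back.
TRUNCATE TABLE t10;
```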
LOAD DATA test: Bulk loading 100,000 rows also becomes slower as indexes increase, ranging from 11 sec (t1) to 45.8 sec (t10), with a supporting chart.
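A typical way to produce and reload such a dump; the file path and field/line options below are assumptions, not taken from the article (the path matches MySQL's default secure_file_priv directory):

```sql
-- Export 100,000 rows from the base table to a flat file.
SELECT * FROM t0
INTO OUTFILE '/var/lib/mysql-files/t0.csv'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Reload into a test table; timing this for t1..t10 exposes the
-- per-index maintenance cost of the bulk load.
LOAD DATA INFILE '/var/lib/mysql-files/t0.csv'
INTO TABLE t10
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
```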
Conclusion: Each additional secondary index adds measurable write overhead. In write‑heavy workloads, only essential indexes should be kept; unnecessary indexes should be removed. The article discusses a typical master‑slave scenario where the master may have fewer indexes for faster writes, while the slave can afford more indexes for read performance, provided replication lag is acceptable.
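One practical way to find removal candidates is MySQL's sys schema, which lists indexes that have recorded no reads since server start (the index name in the DROP is assumed for illustration; verify against the full workload before dropping, since these statistics reset on restart):

```sql
-- Indexes never used since the server started.
SELECT * FROM sys.schema_unused_indexes
WHERE object_schema = 'ytt';

-- Drop an index confirmed to be unnecessary.
ALTER TABLE t10 DROP KEY idx_r10;
```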
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise-grade open-source MySQL tools and services, releases a premium open-source component each year on 1024 (Programmers' Day), and continuously operates and maintains them.