Understanding How SET unique_checks=0 Affects InnoDB Unique Index Enforcement
This article analyzes why using SET unique_checks=0 during large MySQL imports can bypass unique‑key checks when data is not cached in the InnoDB buffer pool, explains the underlying change‑buffer mechanism, demonstrates the behavior with sysbench tests, and offers a safe workaround.
During a recent troubleshooting session on a five‑node MySQL InnoDB cluster, we discovered that importing data via MySQL Shell's importTable() utility issues SET unique_checks=0, which under certain conditions causes unique‑key constraints to be ignored, allowing duplicate rows on the primary server while the secondary nodes reject them.
Principle of unique_checks=0 (Does It Disable Unique‑Key Enforcement?)
The official MySQL documentation states that disabling uniqueness checks can speed up large imports because InnoDB can use its change buffer to batch secondary index writes, but it also warns that the data must contain no duplicate keys.
For large imports, SET unique_checks=0 reduces disk I/O by allowing the change buffer to handle secondary index records without reading the data into the buffer pool. However, this optimization only applies to non‑unique secondary indexes; unique indexes still require a disk read to guarantee uniqueness.
Surprisingly, when unique_checks=0 is set, the change buffer appears to be used for unique indexes as well. This leads to the hypothesis that the uniqueness check is performed only when the relevant index pages already reside in the InnoDB buffer pool:
If the rows being inserted are already cached in the buffer pool, InnoDB can perform the uniqueness check in memory, and duplicate inserts are still rejected.
If the rows are not in the buffer pool, InnoDB skips the disk read, writes through the change buffer, and duplicate keys may be inserted.
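The hypothesized two‑path behavior can be sketched as a toy model. This is not InnoDB code; the classes and names (ToyEngine, change_buffer, etc.) are purely illustrative of the decision logic described above:

```python
# Toy model of the hypothesized unique_checks=0 behavior.
# NOT InnoDB internals; it only illustrates the decision logic
# described above. All names here are made up.

class ToyEngine:
    def __init__(self, unique_checks=True):
        self.unique_checks = unique_checks
        self.buffer_pool = {}    # cached pages: key -> row
        self.disk = {}           # persisted rows: key -> row
        self.change_buffer = []  # deferred secondary-index writes

    def insert(self, key, row):
        if self.unique_checks:
            # Normal path: read the page from disk if needed, then check.
            if key not in self.buffer_pool and key in self.disk:
                self.buffer_pool[key] = self.disk[key]  # disk read
            if key in self.buffer_pool:
                raise ValueError(f"Duplicate entry {key!r}")
            self.buffer_pool[key] = row
            self.disk[key] = row
        else:
            # unique_checks=0: only pages already in memory are checked.
            if key in self.buffer_pool:
                raise ValueError(f"Duplicate entry {key!r}")
            # Page not cached: skip the disk read and buffer the write.
            # A duplicate that exists only on disk is silently accepted.
            self.change_buffer.append((key, row))

engine = ToyEngine(unique_checks=False)
engine.disk[1] = "existing row"     # row exists on disk, not cached
engine.insert(1, "duplicate row")   # accepted: no disk read happens
print(len(engine.change_buffer))    # -> 1 buffered write; duplicate slipped in
```

With unique_checks left on, the same insert triggers a disk read and raises a duplicate‑key error, matching the documented behavior.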
Verification
1. Generate data with sysbench
sysbench /usr/local/share/sysbench/oltp_read_write.lua --mysql-host=127.0.0.1 \
--mysql-port=3309 --mysql-user=root --mysql-password=root \
--mysql-db=sbtest --tables=4 --table-size=1000000 --report-interval=3 --threads=8 --time=300 prepare
2. Create a test table and import some rows
CREATE TABLE `t` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`a` int(11) NOT NULL,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
`pad` char(60) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`),
UNIQUE KEY `a` (`a`)
);
INSERT INTO t(a,k,c,pad) SELECT id,k,c,pad FROM sbtest1 WHERE id <= 5000;
3. Restart mysqld with a tiny buffer pool to ensure the data is not cached
innodb_buffer_pool_size = 64M
innodb_buffer_pool_load_at_startup = 0
innodb_buffer_pool_dump_at_shutdown = 0
innodb_buffer_pool_dump_pct = 0
4. Insert duplicate data with unique_checks=0
SET SESSION unique_checks=0;
INSERT INTO t(a,k,c,pad) VALUES(1,439502,'59488447928-69619891934-66520244141-26804756589-33623161608-43445073345-17979281186-83118017446-98119924781-27868548476','0000');
The duplicate row is inserted successfully, and a subsequent query shows multiple rows with the same a value.
SELECT id,a FROM t WHERE a<2;
+------+---+
| id | a |
+------+---+
| 1 | 1 |
| 5022 | 1 |
| 5026 | 1 |
| 5035 | 1 |
+------+---+
Repeatedly executing the sysbench run phase churns the small buffer pool and evicts the test table's pages, so the same duplicate insert succeeds many times.
5. Observations
Some inserts still fail with a duplicate‑key error, especially when the unique key is a CHAR column, suggesting that the behavior also depends on the type of the indexed column.
CREATE TABLE `t1` (
`id` int NOT NULL AUTO_INCREMENT,
`a` char(10) COLLATE utf8mb4_bin DEFAULT NULL,
`k` int NOT NULL DEFAULT '0',
`c` char(120) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL DEFAULT '',
`pad` char(60) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
UNIQUE KEY `a` (`a`),
KEY `k_1` (`k`)
) ENGINE=InnoDB;
Conclusion
If the rows being inserted are cached in the InnoDB buffer pool, InnoDB can still enforce uniqueness even with unique_checks=0 . When the rows are not cached, InnoDB skips the disk read, uses the change buffer for the write, and duplicate keys may be inserted, effectively weakening the unique‑key guarantee.
Note that mysqldump automatically includes SET unique_checks=0 at the beginning of its output, and mysqlshell importTable() does the same.
A bug report has been filed at https://bugs.mysql.com/bug.php?id=106121 .
Solution
To ensure uniqueness when importing, disable the change buffer before running MySQL Shell's importTable():
SET GLOBAL innodb_change_buffering=NONE;
Alternatively, manually comment out or remove SET unique_checks=0 from SQL dump files before importing.
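As a sketch of the second workaround, a small filter can neutralize the unique_checks=0 statements in a dump file before import. This is a hedged, illustrative example, not part of any official tool; the function name and matching rules are assumptions:

```python
import re

# Matches statements such as:
#   SET unique_checks=0;
#   SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
# (mysqldump emits the second form in its header).
UNIQUE_CHECKS_OFF = re.compile(r"unique_checks\s*=\s*0", re.IGNORECASE)

def sanitize_dump(lines):
    """Comment out any SET statement that disables unique_checks."""
    out = []
    for line in lines:
        if line.lstrip().upper().startswith("SET") and UNIQUE_CHECKS_OFF.search(line):
            out.append("-- " + line)  # keep the line visible, but inert
        else:
            out.append(line)
    return out

dump = [
    "SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;",
    "INSERT INTO t VALUES (1, 2);",
]
print(sanitize_dump(dump)[0])  # the SET line, now commented out
```

Commenting the statement out (rather than deleting it) leaves an audit trail in the sanitized dump showing what was disabled.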
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise‑grade open‑source MySQL tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains these projects.