Why MySQL Inserts Can Deadlock: Deep Dive into Locks and Isolation Levels
This article examines how MySQL's lock mechanisms and transaction isolation levels can cause deadlocks during high‑concurrency inserts, presents a real‑world case study, walks through log analysis and testing, and offers practical strategies to diagnose, prevent, and resolve such deadlocks.
Introduction
MySQL is a widely used open‑source relational database. In financial and transaction systems, data consistency and safety are critical, and lock mechanisms ensure transaction correctness. However, under high concurrency, MySQL can encounter deadlocks, where transactions block each other and degrade performance.
Background
The case originates from a Call Center 2.0 digital sub‑channel (IM) that uses a distributed architecture, caching, asynchronous processing, and MySQL 8.x for persistent storage.
Concurrent Write Deadlock Scenario
In an asynchronous insert workflow, the primary key is auto‑incremented while msgId is a distributed unique identifier with a unique index. The default isolation level is REPEATABLE_READ. During deployment the module repeatedly hit deadlocks, producing errors such as:
```
Lock wait timeout exceeded; try restarting transaction

org.springframework.dao.PessimisticLockingFailureException: could not execute
    statement [Lock wait timeout exceeded; try restarting transaction]
```

Log analysis also revealed a prior DuplicateKeyException on the unique index.
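The insert path described above can be sketched with a minimal table definition. The table and column names here are illustrative assumptions, not taken from the actual system:

```sql
-- Hypothetical message table: auto-increment primary key plus a
-- unique distributed message identifier (msgId).
CREATE TABLE im_message (
  id     BIGINT      NOT NULL AUTO_INCREMENT,
  msg_id VARCHAR(64) NOT NULL,        -- distributed unique identifier
  body   TEXT,
  PRIMARY KEY (id),
  UNIQUE KEY uk_msg_id (msg_id)       -- the index involved in the deadlock
) ENGINE=InnoDB;

-- All concurrent asynchronous writers execute the same statement:
INSERT INTO im_message (msg_id, body) VALUES (?, ?);
```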
Problem Diagnosis
Running SHOW ENGINE INNODB STATUS showed a next‑key lock (X‑type) on the primary key range, which blocked other inserts whose msgId fell within the same gap. The relevant excerpt:
```
*** (1) TRANSACTION:
TRANSACTION 749452, ACTIVE 0 sec inserting
... LOCK WAIT 6 lock struct(s), heap size 1136, 3 row lock(s)
... RECORD LOCKS space id 154 page no 873 ... index PRIMARY ... lock_mode X
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
... lock_mode X insert intention waiting
```

InnoDB Lock Types
Shared lock (S): allows multiple transactions to read the same row concurrently.
Exclusive lock (X): grants a single transaction the right to update or delete the row.
Intention locks (IS/IX): table-level locks signaling a transaction's intent to set shared or exclusive row locks.
Record lock: locks a single index record.
Gap lock: locks the gap between two index records, preventing inserts into that gap.
Next-key lock: a record lock combined with a gap lock on the gap preceding the record.
Insert intention lock: a type of gap lock set before an insert, allowing concurrent inserts at different positions within the same gap.
Auto-increment lock: a table-level lock that serializes AUTO_INCREMENT value generation.
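On MySQL 8.x, the locks listed above can be observed live while a transaction is blocked. The following queries use the standard performance_schema lock tables and make no application-specific assumptions:

```sql
-- Per transaction: which index records are locked, in what mode, and
-- which requests are still waiting (LOCK_STATUS = 'WAITING').
SELECT engine_transaction_id, object_name, index_name,
       lock_type, lock_mode, lock_status, lock_data
FROM performance_schema.data_locks;

-- Pairs each waiting lock request with the transaction blocking it.
SELECT blocking_engine_transaction_id  AS blocker,
       requesting_engine_transaction_id AS waiter
FROM performance_schema.data_lock_waits;
```

Unlike SHOW ENGINE INNODB STATUS, which only reports the latest detected deadlock, these tables show the current lock state while the contention is still in progress.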
Attempted Isolation Level Change
Switching to READ_COMMITTED did not eliminate the deadlock, although the lock range was smaller than under REPEATABLE_READ.
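For reference, the isolation level change tested here can be applied per session or globally. Under READ_COMMITTED, gap locking is largely disabled, which narrows the lock range but, as observed, does not eliminate the conflict:

```sql
-- Per session, effective for subsequent transactions:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Globally (MySQL 8.x system variable name):
SET GLOBAL transaction_isolation = 'READ-COMMITTED';

-- Verify the current session's level:
SELECT @@transaction_isolation;
```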
Reproducing the Issue
A test table test(id INT AUTO_INCREMENT PRIMARY KEY, name INT UNIQUE) was created. After inserting three rows (100, 200, 300), a transaction attempted to insert a duplicate name=200, causing a Duplicate entry error. Subsequent concurrent inserts (99, 199, 299) blocked, while an insert of name=399 succeeded under READ_COMMITTED.
```sql
CREATE TABLE test (
  id   INT NOT NULL AUTO_INCREMENT,
  name INT NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY index_name (name) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1;

INSERT INTO test(name) VALUES (100), (200), (300);

-- Session 1
BEGIN;
INSERT INTO test(name) VALUES (200);  -- Duplicate entry

-- Session 2
BEGIN;
INSERT INTO test(name) VALUES (99);   -- blocked

-- Session 3
BEGIN;
INSERT INTO test(name) VALUES (399);  -- succeeds under READ_COMMITTED
```

Analysis
When an insert hits a duplicate key, InnoDB places a shared (S) lock on the existing index record. Under REPEATABLE_READ this is a shared next-key lock that also covers the adjacent gap, blocking inserts across the entire key range; under READ_COMMITTED only the record itself is locked, so the blocked range is narrower. Insert intention locks normally let concurrent inserts in the same gap proceed without blocking each other, but a duplicate-key error changes the picture: the failing transactions sit on shared locks, and once they retry or the original holder releases, their shared locks clash with each other's exclusive insert intention requests, producing a circular wait that InnoDB reports as a deadlock.
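The shared-lock clash described above can be driven to an actual deadlock with three sessions on the test table from the reproduction. This is a sketch of the commonly documented duplicate-key deadlock pattern; 400 is an arbitrary fresh value:

```sql
-- Session 1: insert succeeds, holds an X lock on the new unique
-- index record for name=400 until commit or rollback.
BEGIN;
INSERT INTO test(name) VALUES (400);

-- Session 2: duplicate key detected, waits for an S lock on that record.
BEGIN;
INSERT INTO test(name) VALUES (400);   -- blocks

-- Session 3: same duplicate, also waits for an S lock.
BEGIN;
INSERT INTO test(name) VALUES (400);   -- blocks

-- Session 1 rolls back: sessions 2 and 3 are both granted the S lock,
-- then each requests an exclusive insert intention lock to perform its
-- own insert -- blocked by the other's S lock. InnoDB detects the
-- cycle and aborts one session with ERROR 1213 (Deadlock found).
ROLLBACK;
```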
Solution
Key recommendations include:
Use SHOW ENGINE INNODB STATUS to view recent deadlock logs.
Enable innodb_print_all_deadlocks to log every deadlock.
Keep transactions short and commit promptly.
Prefer a lower isolation level (e.g., READ_COMMITTED) where gap locking is not required, to narrow the range each lock covers.
Order all table modifications consistently within a transaction.
Add appropriate indexes to reduce lock scope.
Avoid duplicate‑key inserts; use INSERT IGNORE or design the application (e.g., Kafka partitioning) to ensure a single writer per key.
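The duplicate-key recommendation can be expressed directly in SQL; both forms below avoid surfacing a DuplicateKeyException to the application (shown against the test table from the reproduction):

```sql
-- Silently skips rows that would violate the unique key
-- (the error becomes a warning; check the affected-row count).
INSERT IGNORE INTO test(name) VALUES (200);

-- Upsert variant: updates the existing row instead of failing.
INSERT INTO test(name) VALUES (200)
ON DUPLICATE KEY UPDATE name = VALUES(name);
```

Note that both forms still take locks on the duplicate index record and can themselves participate in deadlocks under heavy contention; the most robust fix is to ensure a single writer per key at the application level, e.g., by partitioning the Kafka topic on msgId.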
Understanding MySQL’s locking and isolation behavior is essential for building high‑concurrency, high‑consistency systems.
Conclusion
Even simple INSERT statements can deadlock when duplicate keys trigger lock upgrades. Proper isolation level choice, transaction design, and proactive deadlock monitoring are necessary to prevent performance degradation in demanding environments.