
How to Diagnose and Fix MySQL Deadlocks Without Just Restarting the Service

This article explains why MySQL deadlocks occur, shows how to enable and read deadlock logs, walks through three real‑world case studies, and provides practical SQL, configuration, and architectural solutions to resolve and prevent deadlocks in production environments.

MaGe Linux Operations

Problem Background

MySQL deadlocks are a common pain point in production. Many operations engineers react to a deadlock error by restarting the database, which clears the immediate blockage but leaves the root cause untouched, so the problem reappears later.

The essence of a deadlock is circular waiting caused by inconsistent lock‑acquisition order among concurrent transactions. In InnoDB, deadlocks are not always a configuration issue or bad SQL — high‑concurrency workloads can produce deadlocks even with correct business logic. Frequent deadlocks, however, indicate that the business logic or index design has room for optimisation.

This article targets MySQL 5.7 and MySQL 8.0, systematically explaining deadlock formation, troubleshooting methods, remediation ideas, and preventive measures.

Applicable Scenarios

Frequent deadlock errors in production are blocking business operations.

Monitoring shows an increasing number of Lock wait timeout exceeded errors.

You need to analyse deadlock logs to locate the root cause.

You want to optimise business SQL to reduce deadlock probability.

Deadlock Formation Mechanism

What Is a Deadlock

A deadlock occurs when two or more transactions hold locks the other needs, forming a circular wait that prevents any of them from proceeding.

Typical Scenarios in MySQL InnoDB

Row‑lock conflict : Transaction A locks row X, transaction B locks row Y, then A tries to lock Y while B tries to lock X.

Gap‑lock conflict : Range queries create gap locks; multiple transactions lock intersecting index ranges.

Unique‑index conflict : Concurrent inserts of the same unique key.

Lock‑scope widening : A query that cannot use an index locks every scanned row. (InnoDB does not escalate row locks to table locks, but full scans widen the conflict scope in practice.)

InnoDB Lock Types

Shared lock (S lock) : Allows a transaction to read a row.

Exclusive lock (X lock) : Allows a transaction to update or delete a row.

Record lock : Locks an index record.

Gap lock : Locks the gap between index records to prevent phantom reads.

Next‑Key lock : Combination of record lock and gap lock.

Intention lock : Table‑level lock indicating that a transaction intends to lock rows.
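To make the Record/Gap/Next‑Key distinction concrete, here is a small illustrative sketch (plain Python, no MySQL needed; the interval notation is mine, not InnoDB output) of which ranges Next‑Key locks cover for an index holding the values 5, 10, 15:

```python
import math

def next_key_intervals(index_values):
    """For sorted index records, list the intervals that Next-Key locks cover:
    each lock spans the gap before a record plus the record itself, i.e.
    (prev, rec]. The gap after the last record (up to the supremum
    pseudo-record) is locked separately as (last, +inf)."""
    intervals = []
    prev = -math.inf
    for rec in sorted(index_values):
        intervals.append((prev, rec))  # interpreted as (prev, rec]
        prev = rec
    intervals.append((prev, math.inf))  # supremum pseudo-record gap
    return intervals

# An index containing 5, 10, 15 yields Next-Key lock ranges
# (-inf,5], (5,10], (10,15], and the gap (15,+inf)
print(next_key_intervals([10, 5, 15]))
```

This is why a range condition such as `WHERE id > 10` can lock far more than the rows it returns: it also locks the gaps between records.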

Deadlock vs. Lock Wait

Lock wait (Lock wait timeout) : Transaction A waits for transaction B to release a lock; after innodb_lock_wait_timeout (default 50 s) it errors out. This is a one‑way wait, not a deadlock.

Deadlock : Transaction A waits for B, and B waits for A. MySQL’s deadlock detector automatically rolls back one transaction to break the cycle.

In both MySQL 5.7 (5.7.15+) and 8.0 the deadlock detector is ON by default ( innodb_deadlock_detect = ON). When a deadlock occurs, MySQL rolls back the transaction it estimates to be cheapest to undo, usually the one that has inserted, updated, or deleted the fewest rows.
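Conceptually, the detector maintains a waits‑for graph and rolls back one member of any cycle it finds. A toy sketch (illustrative only; the names and the cost heuristic are simplified, not InnoDB's actual implementation):

```python
def find_cycle(waits_for):
    """Detect a cycle in a waits-for graph {txn: txn_it_waits_on}.
    Returns the transactions on the cycle (plus any lead-in tail) or None."""
    for start in waits_for:
        seen, node = set(), start
        while node in waits_for:
            if node in seen:
                return sorted(seen)
            seen.add(node)
            node = waits_for[node]
    return None

def pick_victim(cycle, rows_locked):
    """InnoDB-style heuristic, simplified: roll back the cheapest
    transaction, approximated here by the fewest rows locked/modified."""
    return min(cycle, key=lambda t: rows_locked[t])

# A waits for B and B waits for A -> deadlock; B holds fewer locks, so B is rolled back
waits = {"trx_A": "trx_B", "trx_B": "trx_A"}
locks = {"trx_A": 120, "trx_B": 3}
cycle = find_cycle(waits)
print(cycle, pick_victim(cycle, locks))
```

A plain lock wait is the degenerate case: the graph has a chain but no cycle, so `find_cycle` returns nothing and the waiter simply times out.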

Troubleshooting Tools: How to Obtain Deadlock Information

Step 1: Enable Deadlock Logging

# my.cnf / my.ini
[mysqld]
# Record detailed deadlock information to the error log
innodb_print_all_deadlocks = ON

# Lock‑wait timeout (seconds)
innodb_lock_wait_timeout = 50

# Deadlock detection (ON by default in 8.0, also ON in 5.7)
innodb_deadlock_detect = ON
# Or change at runtime without a restart (both variables are dynamic in 5.7 and 8.0)
SET GLOBAL innodb_print_all_deadlocks = ON;
SET GLOBAL innodb_lock_wait_timeout = 50;

# Verify that the settings took effect
SHOW VARIABLES LIKE 'innodb_print_all_deadlocks';
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
SHOW VARIABLES LIKE 'innodb_deadlock_detect';

Step 2: View the Deadlock Log

# Show deadlock entries in the error log
grep -A 50 "DEADLOCK" /var/log/mysql/error.log

# If the error‑log location is unknown
SHOW VARIABLES LIKE 'log_error';

# For MySQL 5.7+, after enabling the flag, trigger a deadlock and then view the log

Step 3: Analyse lock information in performance_schema

From MySQL 5.7 onward you can query performance_schema for detailed lock data.

-- View all current locks and their transactions (MySQL 5.7)
SELECT l.lock_id, l.lock_mode, l.lock_type, l.lock_table, l.lock_index,
       l.lock_space, l.lock_page, l.lock_rec, l.lock_data,
       t.trx_id, t.trx_state, t.trx_started, t.trx_rows_locked
FROM information_schema.INNODB_LOCKS l
JOIN information_schema.INNODB_TRX t ON l.lock_trx_id = t.trx_id;

-- Show lock‑wait graph (who is waiting for whom)
SELECT r.trx_id AS waiting_trx_id, r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query AS waiting_query,
       b.trx_id AS blocking_trx_id, b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query AS blocking_query, b.trx_started AS blocking_trx_started,
       b.trx_rows_locked AS blocking_trx_rows_locked
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX b ON w.blocking_trx_id = b.trx_id
JOIN information_schema.INNODB_TRX r ON w.requesting_trx_id = r.trx_id;

In MySQL 8.0 the tables information_schema.INNODB_LOCKS and INNODB_LOCK_WAITS are removed; lock data must be queried from performance_schema.data_locks and performance_schema.data_lock_waits instead.

# MySQL 8.0 lock information
SELECT * FROM performance_schema.data_locks;
SELECT * FROM performance_schema.data_lock_waits;
-- Record locks held or waited on by active transactions (MySQL 8.0)
SELECT dl.engine_lock_id, dl.lock_mode, dl.lock_type, dl.lock_status,
       dl.object_name, dl.index_name, dl.lock_data,
       t.trx_id, t.trx_state, t.trx_started, t.trx_query
FROM performance_schema.data_locks dl
JOIN information_schema.INNODB_TRX t ON dl.engine_transaction_id = t.trx_id
WHERE dl.lock_type = 'RECORD' AND t.trx_state = 'RUNNING';

Case Study 1: Row‑Lock Conflict Leading to Deadlock

Symptom

On the orders table, high‑concurrency order creation sometimes yields:

ERROR 1213 (40001): Deadlock found when trying to get lock;
try restarting transaction

The failure rate is about one per thousand orders.

Investigation Process

View deadlock records in the error log :

tail -n 200 /var/log/mysql/error.log | grep -A 50 "DEADLOCK"

Analyse the deadlock log :

*** (1) TRANSACTION:
TRANSACTION 98765, ACTIVE 2 sec inserting
... INSERT INTO orders (user_id, product_id, amount, status) VALUES (1001, 2001, 99.00, 'pending')

*** (1) HOLDS THE LOCK(S):
RECORD LOCKS ... index PRIMARY of table `shop`.`orders` lock_mode X ...

*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS ... index PRIMARY of table `shop`.`orders` lock_mode X waiting

*** (2) TRANSACTION:
TRANSACTION 98766, ACTIVE 2 sec inserting
... INSERT INTO orders (user_id, product_id, amount, status) VALUES (1001, 2002, 149.00, 'pending')

Both transactions are inserting into orders.

Each holds an exclusive lock on a record and waits for the other’s lock.

This indicates that the two INSERT statements collided on the same index range (both orders are for user_id 1001) rather than on distinct rows.

Check the current state of the orders table :

SHOW CREATE TABLE orders\G
SELECT * FROM orders ORDER BY id DESC LIMIT 10;
SELECT id, user_id, created_at FROM orders ORDER BY id DESC LIMIT 20;

Inspect the rows inserted at the time of the deadlock :

SELECT id, user_id, product_id, status, created_at
FROM orders
WHERE created_at BETWEEN '2024-01-01 10:00:00' AND '2024-01-01 10:05:00'
ORDER BY id DESC;

Root Cause

The log shows that two concurrent transactions attempted to insert orders for the same user_id. Because the orders table has a user_id index, InnoDB under REPEATABLE READ creates a Next‑Key lock on that index entry, causing each transaction to wait for the other.

A more direct cause is the application‑level retry logic that re‑executes the same INSERT within a very short interval, so both attempts reach the database almost simultaneously.
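Whatever retry policy the application keeps, it should back off with jitter so that two transactions that just deadlocked do not collide again on the next attempt. A generic sketch (the base and cap values are arbitrary examples, not recommendations):

```python
import random

def backoff_delays(attempts, base=0.05, cap=2.0, rng=random.random):
    """Exponential backoff with full jitter: attempt n sleeps a random
    amount in [0, min(cap, base * 2**n)) seconds, so clients that
    deadlocked once are unlikely to re-collide on the retry."""
    return [rng() * min(cap, base * (2 ** n)) for n in range(attempts)]

# Upper bounds for 5 attempts: 0.05, 0.1, 0.2, 0.4, 0.8 seconds
print(backoff_delays(5))
```

In the order-creation path, each delay would be passed to `time.sleep()` before re-issuing the INSERT, with a hard cap on the number of attempts.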

Remediation

Solution 1: Application‑level distributed lock (recommended)

import uuid

import redis  # pip install redis

# A shared client; in real code, configure host/port/auth as appropriate
redis_client = redis.Redis(decode_responses=True)

def create_order(user_id, product_id, amount):
    # Use a Redis lock to serialize order creation for the same user
    lock_key = f"order:lock:{user_id}"
    lock_value = str(uuid.uuid4())
    lock_timeout = 5  # seconds; must exceed the worst-case insert time
    if not redis_client.set(lock_key, lock_value, nx=True, ex=lock_timeout):
        raise Exception("Order rate too high, please retry later")
    try:
        with db_connection.cursor() as cursor:
            sql = ("INSERT INTO orders (user_id, product_id, amount, status) "
                   "VALUES (%s, %s, %s, 'pending')")
            cursor.execute(sql, (user_id, product_id, amount))
            db_connection.commit()
            return cursor.lastrowid
    finally:
        # Release only if we still hold the lock. Note: get-then-delete is not
        # atomic; use a Lua script in production so a lock that expired and was
        # re-acquired by another client is never deleted.
        if redis_client.get(lock_key) == lock_value:
            redis_client.delete(lock_key)

Solution 2: Add a unique constraint as a safety net

ALTER TABLE orders ADD CONSTRAINT uk_user_product UNIQUE (user_id, product_id);

INSERT INTO orders (user_id, product_id, amount, status)
VALUES (1001, 2001, 99.00, 'pending')
ON DUPLICATE KEY UPDATE amount = VALUES(amount), status = VALUES(status);

Solution 3: Lower the isolation level

# Session‑level change (requires appropriate privileges)
SET SESSION transaction_isolation = 'READ-COMMITTED';

# Global change (requires SUPER or, in 8.0, SYSTEM_VARIABLES_ADMIN)
SET GLOBAL transaction_isolation = 'READ-COMMITTED';

# Persist in my.cnf
[mysqld]
transaction-isolation = READ-COMMITTED

Solution 4: Align SQL write order

# Wrong: transaction A locks user then order, transaction B locks order then user → deadlock
# Correct: all transactions lock user first, then order
UPDATE users SET balance = balance - 100 WHERE id = 100;
UPDATE orders SET status = 'paid' WHERE id = 1;

Verification

# 1. Confirm deadlocks no longer appear
tail -f /var/log/mysql/error.log | grep DEADLOCK

# 2. Verify no duplicate inserts
SELECT user_id, product_id, COUNT(*) AS cnt FROM orders GROUP BY user_id, product_id HAVING cnt > 1;

# 3. Check application logs for reduced deadlock frequency
# 4. Run a stress test
mysqlslap --concurrency=20 --iterations=100 \
  --query="INSERT INTO orders (user_id, product_id, amount, status) VALUES (FLOOR(1+RAND()*1000), FLOOR(1+RAND()*1000), 99.00, 'pending')" \
  --create-schema=shop

Risk Warning

Before adding a unique constraint, ensure that user_id + product_id truly must be unique; otherwise legitimate business rows will be rejected.

Lowering isolation to READ‑COMMITTED reduces isolation guarantees and may introduce non‑repeatable reads.

Application‑level distributed locks introduce a Redis dependency; if Redis is unavailable, order creation may fail unless a fallback strategy is implemented.

Deadlocks involving DELETE + INSERT are more complex because they may lock both rows and gaps.

Case Study 2: Gap‑Lock Deadlock

Symptom

On the products inventory table, three concurrent requests try to decrement stock when only one item is left. Two requests fail with a deadlock error, while one succeeds.

Investigation Process

# Examine table structure and indexes
SHOW CREATE TABLE products\G

# Look at recent read statements
SELECT * FROM performance_schema.events_statements_history
WHERE SQL_TEXT LIKE '%products%'
ORDER BY END_EVENT_ID DESC LIMIT 10;
The corresponding deadlock log:

*** (1) TRANSACTION:
TRANSACTION 123456, ACTIVE 0 sec updating
UPDATE products SET stock = stock - 1 WHERE id = 1001 AND stock > 0
RECORD LOCKS ... index PRIMARY of table `shop`.`products` lock_mode X ...

*** (1) HOLDS THE LOCK(S):
RECORD LOCKS ... index PRIMARY of table `shop`.`products` lock_mode X ...

*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS ... index PRIMARY of table `shop`.`products` lock_mode X waiting

*** (2) TRANSACTION:
TRANSACTION 123457, ACTIVE 0 sec updating
UPDATE products SET stock = stock - 1 WHERE id = 1001 AND stock > 0
RECORD LOCKS ... index PRIMARY of table `shop`.`products` lock_mode X ...

Root Cause

All three transactions execute the same UPDATE while stock = 1. Under REPEATABLE READ, the UPDATE performs a current read and requests an exclusive Next‑Key lock on the matching row. Only one transaction can hold that record lock at a time; the others queue behind it while already holding locks of their own, such as gap locks taken while scanning. Gap locks do not conflict with each other, so several transactions can hold them over the same range.

When each waiting transaction then requests a lock that another waiter already holds, the wait becomes circular rather than linear, and InnoDB reports a deadlock instead of a simple lock wait.

Remediation

Solution 1: Use optimistic locking instead of pessimistic locking

# Pessimistic (may deadlock)
UPDATE products SET stock = stock - 1 WHERE id = 1001 AND stock > 0;

# Optimistic: add a version column and check it in the WHERE clause
UPDATE products SET stock = stock - 1, version = version + 1
WHERE id = 1001 AND stock > 0 AND version = 1;  -- 1 = the version value read beforehand

def deduct_stock(product_id, quantity=1, max_retries=3):
    """Deduct stock with optimistic locking: no row lock is held between read and write."""
    for _ in range(max_retries):
        with db_connection.cursor() as cursor:
            cursor.execute("SELECT stock, version FROM products WHERE id = %s", (product_id,))
            row = cursor.fetchone()
            if not row or row['stock'] < quantity:
                raise Exception("Insufficient stock")
            affected = cursor.execute(
                "UPDATE products SET stock = stock - %s, version = version + 1 "
                "WHERE id = %s AND version = %s AND stock >= %s",
                (quantity, product_id, row['version'], quantity)
            )
            db_connection.commit()
            if affected:
                return
            # Lost the race: another transaction bumped the version; retry
    raise Exception("Stock contention too high, please retry later")

Solution 2: Pre‑lock the row with SELECT … FOR UPDATE

-- Note: IF … THEN is only valid inside a stored program; run this logic in a
-- stored procedure, or do the conditional check in application code.
START TRANSACTION;
SELECT stock INTO @current_stock FROM products WHERE id = 1001 FOR UPDATE;
IF @current_stock >= 1 THEN
    UPDATE products SET stock = stock - 1 WHERE id = 1001;
    COMMIT;
ELSE
    ROLLBACK;
END IF;

Solution 3: Use a reservation (pre‑allocation) table instead of direct UPDATEs

# Create a reservation table
CREATE TABLE stock_reserve (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    product_id INT NOT NULL,
    quantity INT NOT NULL DEFAULT 1,
    status ENUM('pending','confirmed','cancelled') DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_product_status (product_id, status)
);

-- Reserve stock
INSERT INTO stock_reserve (product_id, quantity) VALUES (1001, 1);

-- Confirm reservation and deduct real stock atomically
UPDATE stock_reserve, products
SET stock_reserve.status = 'confirmed',
    products.stock = products.stock - stock_reserve.quantity
WHERE stock_reserve.id = ?
  AND stock_reserve.product_id = products.id
  AND stock_reserve.status = 'pending';

Verification

# 1. Verify deadlock frequency drops (innodb_print_all_deadlocks writes them to the error log)
grep -c "DEADLOCK" /var/log/mysql/error.log

-- Or inspect the most recent one
SHOW ENGINE INNODB STATUS\G

# 2. Check stock accuracy
SELECT id, stock, version FROM products WHERE id = 1001;

# 3. Compare reservation table before/after
SELECT COUNT(*) AS total_reserve, SUM(quantity) AS total_quantity
FROM stock_reserve WHERE product_id = 1001 AND status = 'confirmed';

# 4. Run a sysbench stress test to ensure deadlocks are rare

Risk Warning

Optimistic‑lock retry loops need a maximum retry count to avoid long user‑side waits.

SELECT … FOR UPDATE holds the row lock for the whole transaction, which can significantly reduce throughput under high concurrency.

The reservation‑table approach adds extra rows and complexity; a cleanup job is required for cancelled reservations.

Case Study 3: Unique‑Index Conflict Deadlock

Symptom

During high‑concurrency user registration, multiple threads execute the same INSERT on the users table, which has unique indexes on username and email. Deadlocks appear in the error log.

Investigation Process

# Sample deadlock log
*** (1) TRANSACTION:
TRANSACTION 5555, ACTIVE 0 sec inserting
INSERT INTO users (username, email, ...) VALUES ('john_doe', '[email protected]', ...)
RECORD LOCKS ... index username of table `app`.`users` lock_mode X waiting

*** (2) TRANSACTION:
TRANSACTION 5556, ACTIVE 0 sec inserting
INSERT INTO users (username, email, ...) VALUES ('john_doe', '[email protected]', ...)
RECORD LOCKS ... index username of table `app`.`users` lock_mode X

Root Cause

Two concurrent INSERTs try to insert the same username='john_doe' (and the same email). The first INSERT takes an implicit exclusive lock on the new unique index entry. When the second INSERT hits the duplicate key, it requests a shared (S) lock on that entry and waits. If the first transaction rolls back, or several waiters accumulate, the waiters each obtain an S lock and then request conflicting exclusive/insert‑intention locks, producing a circular wait. InnoDB detects the cycle and rolls back one of the transactions.

Remediation

Solution 1: Application‑level pre‑check

def register_user(username, email, password):
    # Note: this check-then-insert is not atomic; keep the unique constraints
    # as the real guarantee (see Risk Warning below)
    existing = db.query("SELECT id FROM users WHERE username = %s OR email = %s", (username, email)).fetchone()
    if existing:
        raise ValueError("Username or email already registered")
    db.execute("INSERT INTO users (username, email, password_hash) VALUES (%s, %s, %s)",
               (username, email, hash_password(password)))
    db.commit()

Solution 2: INSERT IGNORE or ON DUPLICATE KEY UPDATE (idempotent scenarios)

# INSERT IGNORE – silently skip duplicate rows
INSERT IGNORE INTO users (username, email, password_hash) VALUES ('john_doe', '[email protected]', 'hashed_pw');
SELECT ROW_COUNT();  -- 1 = inserted, 0 = ignored

# ON DUPLICATE KEY UPDATE – update existing row on conflict
INSERT INTO users (username, email, password_hash) VALUES ('john_doe', '[email protected]', 'hashed_pw')
ON DUPLICATE KEY UPDATE email = VALUES(email), password_hash = VALUES(password_hash);

Solution 3: REPLACE (DELETE + INSERT)

# REPLACE deletes the existing row and inserts a new one
REPLACE INTO users (username, email, password_hash) VALUES ('john_doe', '[email protected]', 'hashed_pw');

Solution 4: Distributed lock around registration

def register_user_with_lock(username, email, password):
    lock_key = f"user:register:{username}"
    lock_val = str(uuid.uuid4())
    # redis_client: a redis.Redis instance created with decode_responses=True
    if not redis_client.set(lock_key, lock_val, nx=True, ex=5):
        raise Exception("Registration in progress, please retry later")
    try:
        if db.query("SELECT 1 FROM users WHERE username = %s", (username,)).fetchone():
            raise ValueError("Username already exists")
        db.execute("INSERT INTO users (username, email, password_hash) VALUES (%s, %s, %s)",
                   (username, email, hash_password(password)))
        db.commit()
    finally:
        # Non-atomic release; use a Lua script in production
        if redis_client.get(lock_key) == lock_val:
            redis_client.delete(lock_key)

Verification

# 1. Check for duplicate usernames
SELECT username, COUNT(*) AS cnt FROM users GROUP BY username HAVING cnt > 1;

# 2. Verify unique constraints exist
SHOW CREATE TABLE users\G

# 3. Simulate concurrent registration with multiple mysql clients
# 4. Inspect the most recent deadlock (stock MySQL has no Innodb_deadlocks status counter)
SHOW ENGINE INNODB STATUS\G

Risk Warning

REPLACE performs a DELETE followed by INSERT, which locks a larger range and is unsuitable for high‑concurrency scenarios.

ON DUPLICATE KEY UPDATE has been available since long before 5.7, but note that a conflict on any unique key (single or composite) triggers the UPDATE path.

Application‑level pre‑check is not atomic; without the unique constraint as a fallback, race conditions can still cause duplicate rows.
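The non‑atomicity of check‑then‑insert can be shown without a database by replaying two clients' interleaved steps by hand; this is a toy in‑memory simulation where a dict stands in for the unique index (my own illustration, not MySQL behavior):

```python
def register_no_constraint(store, steps):
    """Replay interleaved (client, action, username) steps against a plain
    list: the pre-check and the insert are separate steps, so two clients
    can both pass the check before either inserts."""
    checks = {}
    for client, action, username in steps:
        if action == "check":
            checks[client] = username in store
        elif action == "insert" and not checks[client]:
            store.append(username)
    return store

def register_with_constraint(store, steps):
    """Same interleaving, but inserts go through a dict acting as a unique
    index: the duplicate insert is rejected atomically at write time."""
    checks, rejected = {}, []
    for client, action, username in steps:
        if action == "check":
            checks[client] = username in store
        elif action == "insert" and not checks[client]:
            if username in store:
                rejected.append(client)   # the "duplicate-key error"
            else:
                store[username] = client
    return store, rejected

# Both clients check (nothing there yet), then both insert:
steps = [("c1", "check", "john"), ("c2", "check", "john"),
         ("c1", "insert", "john"), ("c2", "insert", "john")]
print(register_no_constraint([], steps))    # duplicate rows slip through
print(register_with_constraint({}, steps))  # one row stored, one client rejected
```

The application pre‑check only shrinks the race window; the constraint is what closes it.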

Core Commands Quick Reference

# 1. View all current transactions and held locks (most common)
SELECT trx_id, trx_state, trx_started, trx_requested_lock_id, trx_weight,
       trx_mysql_thread_id, trx_query, trx_rows_locked, trx_rows_modified
FROM information_schema.INNODB_TRX;

# MySQL 8.0
SELECT * FROM performance_schema.data_locks;
SELECT * FROM performance_schema.data_lock_waits;

# 2. View current lock information
SELECT * FROM information_schema.INNODB_LOCKS;   -- MySQL 5.7
SELECT * FROM information_schema.INNODB_LOCK_WAITS;   -- MySQL 5.7

# 3. Show the SQL that is currently blocked
SELECT r.trx_id AS waiting_trx_id, r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query AS waiting_query,
       b.trx_id AS blocking_trx_id, b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX b ON w.blocking_trx_id = b.trx_id
JOIN information_schema.INNODB_TRX r ON w.requesting_trx_id = r.trx_id;

# 4. Show the most recent deadlock (stock MySQL has no Innodb_deadlocks counter;
#    Percona Server exposes one); see the LATEST DETECTED DEADLOCK section
SHOW ENGINE INNODB STATUS\G

# 5. Find recently rolled-back transactions (requires transaction instrumentation)
SELECT * FROM performance_schema.events_transactions_history
WHERE STATE = 'ROLLED BACK'
ORDER BY END_EVENT_ID DESC LIMIT 10;

# 6. List currently running SQL (excluding sleeping threads)
SELECT p.ID AS thread_id, p.USER, p.HOST, p.COMMAND, t.trx_state, t.trx_started,
       t.trx_rows_locked,
       SUBSTRING(esc.SQL_TEXT, 1, 100) AS current_sql
FROM information_schema.PROCESSLIST p
LEFT JOIN information_schema.INNODB_TRX t ON p.ID = t.trx_mysql_thread_id
LEFT JOIN performance_schema.threads th ON th.PROCESSLIST_ID = p.ID
LEFT JOIN performance_schema.events_statements_current esc ON esc.THREAD_ID = th.THREAD_ID
WHERE p.COMMAND != 'Sleep'
ORDER BY t.trx_started;

Deadlock Prevention Measures

SQL‑Level Prevention

Access data in a fixed order (e.g., always lock user_id before order_id) to avoid circular wait.
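A fixed acquisition order can be enforced mechanically by sorting the resource keys before locking. A sketch with in‑process locks (account ids and balances are made up; real code would lock database rows the same way, lowest id first):

```python
import threading

account_locks = {1: threading.Lock(), 2: threading.Lock()}
balances = {1: 500, 2: 500}

def transfer(src, dst, amount):
    """Always acquire locks in ascending account-id order, so concurrent
    transfers 1->2 and 2->1 cannot form a circular wait."""
    first, second = sorted((src, dst))
    with account_locks[first], account_locks[second]:
        balances[src] -= amount
        balances[dst] += amount

# 50 transfers in each direction, concurrently; without the sorted()
# ordering this pattern is the textbook two-lock deadlock
threads = [threading.Thread(target=transfer, args=(1, 2, 10)) for _ in range(50)]
threads += [threading.Thread(target=transfer, args=(2, 1, 10)) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(balances)
```

The same discipline applied to SQL means every transaction issues its UPDATEs on users before orders, never the reverse.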

Reduce transaction holding time: avoid network I/O or heavy computation inside a transaction.

# Good: batch all writes in a single transaction
START TRANSACTION;
INSERT INTO order_items (order_id, product_id, quantity) VALUES (1,101,2),(1,102,1),(1,103,3);
UPDATE orders SET status='confirmed' WHERE id=1;
COMMIT;

# Bad: long-running transaction making external calls while holding locks (pseudocode)
START TRANSACTION;
items = db.query("SELECT * FROM order_items WHERE order_id=1");
for item in items:
    result = call_external_service(item)  # slow
    if result:
        db.execute("UPDATE order_items SET status='done' WHERE id=%s", item.id);
COMMIT;

Use proper indexes; missing indexes cause full scans and larger lock ranges.

# Check if a query uses an index
EXPLAIN SELECT * FROM orders WHERE user_id = 1001 AND status = 'pending';
# If type = ALL, add an index
CREATE INDEX idx_orders_user_status ON orders(user_id, status);

Prefer primary‑key or unique‑key lookups; they lock only a single row, whereas ordinary indexes may acquire Next‑Key locks on ranges.

Configuration‑Level Prevention

[mysqld]
# Short lock‑wait timeout to avoid long‑running blocked transactions
innodb_lock_wait_timeout = 10

# Keep deadlock detection enabled (default ON)
innodb_deadlock_detect = ON

# Enable deadlock logging
innodb_print_all_deadlocks = ON

# Enable lock instrumentation for performance_schema
performance-schema-instrument = 'wait/lock/%=ON'

# Transaction isolation level (READ‑COMMITTED can reduce Next‑Key lock scope)
transaction-isolation = REPEATABLE-READ

# Make InnoDB honour LOCK TABLES (default ON; InnoDB never escalates row locks to table locks)
innodb_table_locks = ON

Architecture‑Level Prevention

Use a message queue to smooth spikes: inventory deduction, balance updates, etc., are serialized by the queue, eliminating concurrent DB writes.

Application‑level distributed lock (Redis, ZooKeeper) before critical DB operations.

Read‑write splitting: route reads to replicas, writes to the primary to reduce lock contention on the primary.

Sharding hot tables (e.g., split orders by user_id) to reduce physical lock overlap.
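Sharding by user_id can be as simple as a modulo route. A hypothetical helper (the table naming scheme and shard count are made up for illustration):

```python
def orders_table_for(user_id, shard_count=8):
    """Route a user's orders to one of shard_count physical tables
    (orders_0 .. orders_7 here), so hot users' row and gap locks land
    on different tables and never physically overlap."""
    return f"orders_{user_id % shard_count}"

print(orders_table_for(1001))  # orders_1  (1001 % 8 == 1)
print(orders_table_for(1006))  # orders_6
```

All queries for a user must go through the same routing function, and shard_count should not change without a data migration plan.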

Common Misconceptions and Pitfalls

Misconception 1: Deadlock Is Only a Database Problem

Deadlocks arise from the interaction between business logic and the database. Simply restarting MySQL or adding hardware does not fix the underlying lock‑order or contention issue; you must analyse the deadlock log to identify the conflicting SQL statements.

Misconception 2: Increasing innodb_lock_wait_timeout Eliminates Deadlocks

This parameter only delays the lock‑wait timeout error; it does not reduce the occurrence of deadlocks. Setting it too high can cause transactions to hold connections for a long time, worsening resource pressure.

Misconception 3: Switching to READ‑COMMITTED Removes All Deadlocks

Lowering the isolation level reduces the range of Next‑Key locks, but row‑level lock conflicts can still happen under any isolation level. Evaluate the impact on data consistency before changing isolation.

Misconception 4: Large Transactions Are Safe

Long‑running, large transactions hold locks for extended periods and increase the chance of deadlocks. Split heavy work into smaller transactions whenever possible.

Misconception 5: A Unique Constraint Prevents Duplicate Inserts

The unique constraint guarantees database‑level uniqueness, but concurrent application checks can still pass simultaneously, leading to duplicate attempts that are resolved by the constraint. Use the constraint as a safety net, not as the sole prevention method.

Summary

MySQL deadlock troubleshooting hinges on three pillars:

Read and interpret the deadlock log. Enable innodb_print_all_deadlocks to capture detailed logs, then focus on the HOLDS THE LOCK(S) and WAITING FOR THIS LOCK TO BE GRANTED sections to identify the conflicting resources.

Understand InnoDB’s lock mechanisms. Recognise how Next‑Key locks, gap locks, and row locks behave under REPEATABLE READ, and why range queries or unique‑index checks can cause circular waits.

Optimize both business logic and SQL. Application‑level distributed locks, message queues, optimistic locking, proper indexing, and aligned SQL ordering often yield greater reductions in deadlock frequency than pure SQL rewrites.

Never rely on a database restart to “fix” deadlocks; it merely masks the symptom. By systematically analysing logs, adjusting lock settings, and applying the remediation patterns above, you can keep deadlock rates within an acceptable range and ensure graceful handling (retry or idempotent operations) when they do occur.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Performance, SQL, Database, deadlock, InnoDB, MySQL
Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
