
How to Diagnose and Optimize Slow MySQL Queries: Practical Steps and Tips

This guide explains why slow SQL hurts performance, prioritizes remediation, details MySQL's execution phases, shows how to enable and read slow‑query logs, provides a real‑world analysis example, and lists concrete optimization techniques to make queries run faster.

dbaplus Community

Why Manage Slow SQL?

Any SQL statement consumes I/O resources; the longer it runs, the longer those resources are held. If a system must process 100 queries per second but 10 of them take >5 seconds, throughput drops to 90 queries/second and the remaining queries queue, directly impacting business.

Prioritization of Fixes

With master → slave read‑write separation, reads run on the slave and writes on the master. Heavy write load on the master increases replication lag on the slave, so consider where each slow query actually executes when prioritizing.

Fix the SQL statements that are executed most frequently first.

Address tables that experience high concurrent access before others.

MySQL Execution Principle

The execution of a statement consists of two main phases:

Parsing: lexical analysis → syntax analysis → logical plan → query optimization → physical execution plan. During this stage MySQL checks the cache; if no cache is usable, it proceeds to mysql_execute_command.

Execution: verify user and table permissions → acquire shared read lock on the table → fetch data into the query cache → release the shared lock.

[Figure: MySQL execution flow]

How to Detect Slow Queries

Set the session‑level threshold (e.g., 1 second) and enable the slow‑query log:

set long_query_time=1;  -- session only
set global slow_query_log='ON';  -- enable globally
show variables like "%slow%";  -- verify settings
show status like "%slow%";  -- view current status

Alternatively, edit my.ini (under [mysqld]) to persist the settings:

slow_query_log = on
slow_query_log_file = D:/mysql5.5.16/data/slow.log
long_query_time = 2
log-queries-not-using-indexes

Enabling the log incurs CPU and I/O overhead, so it is advisable to turn it on intermittently for monitoring.
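To verify that logging works end to end, you can deliberately trip the threshold (a sanity check assuming long_query_time is 1 as set above; SLEEP is a built‑in MySQL function):

```sql
-- SLEEP(2) pauses for 2 seconds, exceeding the 1-second threshold,
-- so this statement should show up in the slow-query log
SELECT SLEEP(2);
```

Afterwards, check the log file (or run SHOW STATUS LIKE 'Slow_queries') to confirm the entry was recorded.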

Slow‑Query Analysis Example

Given the statement:

SELECT * FROM `emp` WHERE ename LIKE '%mQspyv%';

It runs for about 1.13 seconds, exceeding the 1‑second threshold. The slow‑query log entry looks like:

# Time: 150530 15:30:58
# User@Host: root[root] @ localhost [127.0.0.1]
# Query_time: 1.134065  Lock_time: 0.000000  Rows_sent: 8  Rows_examined: 4000000

When many slow queries exist, manual analysis is impractical. MySQL provides mysqldumpslow to summarize logs, e.g.:

mysqldumpslow -s c -t 10 /var/run/mysqld/mysqld-slow.log   # top 10 most frequent
mysqldumpslow -s t -t 3 /var/run/mysqld/mysqld-slow.log    # three slowest queries
mysqldumpslow -s t -t 10 -g "left join" /var/run/mysqld/mysqld-slow.log   # top 10 containing "left join"
mysqldumpslow -s r -t 10 -g 'left join' /var/run/mysqld/mysqld-slow.log   # top 10 by rows returned

Common SQL Optimizations

Avoid subqueries: rewrite as joins when possible. In MySQL 5.5 subqueries cause a full scan of the outer table; MySQL 5.6 improves this, but joins remain more efficient.
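As an illustration of the subquery‑to‑join rewrite, using hypothetical emp/dept tables (the column names here are assumptions for illustration, not from the example above):

```sql
-- Subquery form: MySQL 5.5 may re-execute the subquery per outer row
SELECT * FROM emp
WHERE deptno IN (SELECT deptno FROM dept WHERE loc_id = 10);

-- Equivalent join form (assuming dept.deptno is unique),
-- which lets the optimizer choose the join order and use indexes
SELECT emp.*
FROM emp
JOIN dept ON emp.deptno = dept.deptno
WHERE dept.loc_id = 10;
```

If dept.deptno were not unique, the join could duplicate rows and would need DISTINCT to stay equivalent.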

Do not wrap indexed columns in functions: WHERE YEAR(d) >= 2016 disables the index; rewrite as WHERE d >= '2016-01-01'.

Replace multiple OR conditions with IN: WHERE LOC_ID=10 OR LOC_ID=20 OR LOC_ID=30 uses three index lookups; WHERE LOC_ID IN (10,20,30) uses one.

Use left‑anchored LIKE instead of leading wildcards: WHERE name LIKE 'de%' can use an index, whereas LIKE '%de%' cannot.

Limit result sets with LIMIT M,N to avoid scanning unnecessary rows.
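For example, when only one page of results is needed (table and column names here are illustrative):

```sql
-- Fetch the first 20 matching rows only; M is the offset, N the page size
SELECT empno, ename FROM emp WHERE deptno = 10 LIMIT 0, 20;
```

Note that LIMIT M,N still reads and discards the M skipped rows, so very large offsets remain expensive.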

Match data types : compare numeric columns to numbers, not strings (e.g., WHERE id = 19 instead of WHERE id = '19').

Suppress sorting in GROUP BY when order is irrelevant: add ORDER BY NULL to avoid extra sort work.
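A sketch of that hint (before MySQL 8.0, GROUP BY implied a sort on the grouping columns; ORDER BY NULL suppresses it — the table and column names are illustrative):

```sql
-- Aggregate without the implicit sort on deptno
SELECT deptno, COUNT(*) AS cnt
FROM emp
GROUP BY deptno
ORDER BY NULL;
```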

Remove unnecessary ORDER BY clauses that add overhead without affecting result order.

Conclusion

After identifying slow queries, apply the following measures: simplify SQL, eliminate superfluous clauses, ensure indexes are used, split large tables (keep <10 million rows per table), adopt read‑write separation for database bottlenecks, and, if needed, scale out by adding more database nodes.

Written by dbaplus Community

Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences, delivered by industry experts.