Master MySQL Performance: From Slow Queries to Lightning‑Fast Databases
This guide walks you through identifying MySQL bottlenecks, using EXPLAIN, optimizing indexes, rewriting queries, tuning configuration, applying architectural patterns, and learning from real‑world cases to dramatically speed up your database.
Preface: Why is your database slow?
I once struggled with execution plans and index usage myself. Starting from those pain points, this article gives a systematic walkthrough of MySQL performance optimization and execution-plan analysis, so even beginners can quickly speed up business queries and keep the database stable.
1. Performance Bottleneck Identification Process
Collect the slow query log. Enable it in the MySQL configuration file my.cnf:
<code>[mysqld]
slow_query_log = ON
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
log_queries_not_using_indexes = ON</code>
Monitor metrics: watch TPS (transactions per second), QPS (queries per second), the InnoDB buffer-pool hit rate, disk I/O, and CPU utilization.
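The buffer-pool hit rate mentioned above can be estimated from InnoDB's own counters. A minimal sketch, assuming MySQL 5.7+ where these status variables are exposed through performance_schema:

```sql
-- Raw counters: read requests served overall vs. reads that had to hit disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Approximate hit rate: 1 - (disk reads / total read requests)
SELECT 1 - (
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads') /
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
) AS buffer_pool_hit_rate;
```

A sustained hit rate well below ~99% on a read-heavy workload usually points at an undersized buffer pool.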
Stress-test comparison: use sysbench or custom scripts, ramping from low to high concurrency while recording response time and error rate, to determine whether the bottleneck lies in the database or the business layer.
Beginner tip: Locate the problem before optimizing; blindly changing parameters may seem to help but won't address the root cause.
2. Detailed EXPLAIN Usage
Run the query prefixed with EXPLAIN:
<code>EXPLAIN SELECT u.id, u.name, o.order_date
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.status = 'active'
ORDER BY o.order_date DESC;</code>
Output field descriptions:
id: SELECT query identifier; larger values are executed earlier.
select_type: Query type (SIMPLE, PRIMARY, SUBQUERY, etc.).
table: Table being accessed.
type: Access type (ALL, index, range, ref, eq_ref, const, system, NULL); ALL means a full table scan and should be avoided.
possible_keys: Indexes that could be used.
key: Index actually used.
rows: Estimated number of rows scanned.
Extra: Additional information (Using where, Using filesort, Using temporary).
Note: type = ALL indicates a full table scan, and Using filesort means sorting is done externally, both requiring index optimization.
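On MySQL 8.0.18 and later, EXPLAIN ANALYZE goes one step further: it actually executes the query and reports measured timings and row counts alongside the plan, which often exposes bad estimates that plain EXPLAIN hides. A sketch using the same query shape as above:

```sql
-- Actually runs the query; output shows per-step actual time and rows (MySQL 8.0.18+)
EXPLAIN ANALYZE
SELECT u.id, u.name, o.order_date
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE u.status = 'active'
ORDER BY o.order_date DESC;

-- Machine-readable plan with optimizer cost estimates, available since 5.6
EXPLAIN FORMAT=JSON
SELECT u.id, u.name FROM users u WHERE u.status = 'active';
```

Because EXPLAIN ANALYZE really runs the statement, avoid pointing it at expensive writes or very slow queries on a production primary.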
3. Index Optimization Strategies
Choose appropriate columns for indexes: high-cardinality columns involved in filtering or sorting.
Covering index: let the index contain all queried fields to avoid table lookups.
<code>CREATE INDEX idx_user_status_date ON orders (user_id, order_date);
-- Query now hits only the index, even with large data sets
SELECT user_id, order_date FROM orders WHERE user_id = 123 ORDER BY order_date;</code>
Composite index order: follow the left-most prefix rule; the columns filtered in the WHERE clause should appear first in the index.
Avoid redundant or ineffective indexes: do not index columns with many NULLs or low cardinality, and do not wrap an indexed column in a function such as function(col) inside a predicate, since that prevents the index from being used.
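The left-most prefix rule and the function(col) pitfall can both be illustrated against one composite index; the index name and date values below are illustrative:

```sql
CREATE INDEX idx_user_date ON orders (user_id, order_date);

-- Uses the index: equality on the leading column
SELECT * FROM orders WHERE user_id = 123;

-- Uses the index fully: leading-column equality plus a range on the second column
SELECT * FROM orders
WHERE user_id = 123 AND order_date >= '2024-01-01';

-- Cannot use idx_user_date for the filter: the leading column user_id is skipped
SELECT * FROM orders WHERE order_date >= '2024-01-01';

-- The DATE() wrapper blocks a range seek on order_date;
-- only the user_id prefix of the index remains usable
SELECT * FROM orders
WHERE user_id = 123 AND DATE(order_date) = '2024-01-01';

-- Rewrite as a range predicate on the bare column instead
SELECT * FROM orders
WHERE user_id = 123
  AND order_date >= '2024-01-01' AND order_date < '2024-01-02';
```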
Beginner example: A table orders(user_id INDEX, order_date INDEX) with two single-column indexes still requires a separate sort after filtering by user_id. A composite index on (user_id, order_date) can satisfy both the filter and the ordering directly.
4. Query Rewrite Techniques
Split complex subqueries: transform subqueries into JOINs or temporary tables.
Use LIMIT to restrict scan volume: always add appropriate indexes for pagination queries.
Adjust JOIN order: let small tables drive large tables.
Avoid SELECT *: fetch only required columns to reduce network traffic.
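On the LIMIT point above: with a large OFFSET, MySQL still reads and discards every skipped row, so deep pagination stays slow even with an index. Keyset (seek-method) pagination avoids that. A sketch, assuming orders has an indexed primary key id:

```sql
-- Deep OFFSET pagination: reads and throws away 1,000,000 rows first
SELECT id, user_id, order_date FROM orders
ORDER BY id
LIMIT 20 OFFSET 1000000;

-- Keyset pagination: remember the last id of the previous page and seek past it
SELECT id, user_id, order_date FROM orders
WHERE id > 1000000        -- last id seen on the previous page
ORDER BY id
LIMIT 20;
```

The trade-off is that keyset pagination only supports "next page" style navigation, not jumping to an arbitrary page number.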
Example: Original query using IN subquery:
<code>SELECT * FROM article WHERE id IN (SELECT article_id FROM tag_map WHERE tag = '数据库');</code>
Rewritten as a JOIN:
<code>SELECT a.id, a.title FROM article a
JOIN tag_map t ON a.id = t.article_id
WHERE t.tag = '数据库';</code>
5. Configuration Parameter Tuning
innodb_buffer_pool_size: 60%–80% of server memory on a dedicated host; a larger pool yields a higher hit rate.
innodb_log_file_size: 1G–4G; affects transaction commit performance.
query_cache_type: OFF; the query cache was removed in MySQL 8.0, and on 5.7 disabling it avoids the frequent cache invalidations that degrade performance.
tmp_table_size / max_heap_table_size: 256M–512M; temporary tables that overflow to disk become slow.
Note: Modify parameters cautiously and validate changes in a test environment before applying to production.
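Putting the recommendations above into my.cnf form, for a hypothetical dedicated 16 GB database server (the values are illustrative starting points, not universal recommendations):

```ini
[mysqld]
# ~60-80% of RAM when the server only runs MySQL
innodb_buffer_pool_size = 12G

# Larger redo logs smooth out commit bursts on write-heavy workloads
innodb_log_file_size = 2G

# Query cache was removed in 8.0; explicitly off on 5.7
query_cache_type = OFF

# Keep implicit temporary tables in memory where possible
tmp_table_size = 256M
max_heap_table_size = 256M
```

As the note above says, restart-requiring changes like these belong in a test environment first.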
6. Architectural Level Optimization
Read-write separation: master for writes, replicas for reads; implement with ProxySQL, MyCAT, etc.
Sharding (splitting databases and tables): split data by user, time, or another dimension according to business scale.
Caching strategy: use Redis or Memcached to cache hot data and reduce repeated database queries.
Think about it: If a table receives millions of new rows daily, read‑write separation alone won’t solve write slowdown; consider partitioning or sharding.
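The partitioning idea above can be sketched with MySQL range partitioning by day; the table and column names are illustrative:

```sql
-- Hypothetical log table partitioned by day; old days can then be
-- dropped as a metadata operation instead of a slow bulk DELETE
CREATE TABLE access_log (
    id        BIGINT NOT NULL AUTO_INCREMENT,
    logged_at DATETIME NOT NULL,
    message   VARCHAR(255),
    PRIMARY KEY (id, logged_at)   -- the partition column must appear in every unique key
)
PARTITION BY RANGE (TO_DAYS(logged_at)) (
    PARTITION p20240101 VALUES LESS THAN (TO_DAYS('2024-01-02')),
    PARTITION p20240102 VALUES LESS THAN (TO_DAYS('2024-01-03')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- Archiving a day's data becomes near-instant
ALTER TABLE access_log DROP PARTITION p20240101;
```

Queries that filter on logged_at also benefit from partition pruning, since only the matching partitions are scanned.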
7. Real‑World Case Analysis
An e-commerce platform's order query was slow; EXPLAIN showed Using filesort. Adding a composite index boosted QPS by 40%.
A log table exceeded ten million rows; after converting to a partitioned table and daily archiving, query response dropped from 5 s to 100 ms.
A cache-penetration problem was solved with a Bloom filter; the cache hit rate rose to 95%.
8. Common Pitfalls and Troubleshooting
Misconception: Adding an index always makes queries faster?
Indexes also incur maintenance cost; too many indexes can degrade write performance.
Misconception: Temporary tables are never slow?
Temporary tables that spill to disk are slow; raise tmp_table_size / max_heap_table_size so they stay in memory, or optimize the SQL to avoid them.
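To see how often implicit temporary tables actually spill to disk, compare the two counters below; a high disk ratio suggests raising tmp_table_size / max_heap_table_size or rewriting the offending queries. The second query assumes MySQL 5.7+ with performance_schema enabled:

```sql
-- Created_tmp_disk_tables vs. Created_tmp_tables since server start
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- Which statement digests create the most on-disk temporary tables
SELECT DIGEST_TEXT,
       SUM_CREATED_TMP_DISK_TABLES,
       SUM_CREATED_TMP_TABLES
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC
LIMIT 10;
```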
Troubleshooting approach: analyze CPU, memory, and I/O together using vmstat, iostat, and top.
Conclusion
Applying the above strategies can turn a "turtle‑speed" database into a "rocket" one; stay tuned for more distributed‑scenario optimization insights.
IT Xianyu
We share common IT technologies (Java, Web, SQL, etc.) and practical applications of emerging software development techniques. New articles are posted daily. Follow IT Xianyu to stay ahead in tech. The IT Xianyu series is being regularly updated.