Master MySQL Slow Query Optimization: From Logs to Indexes
This comprehensive guide explains how to detect, analyze, and optimize MySQL slow queries by configuring the slow‑query log, using pt‑query‑digest, interpreting EXPLAIN output, designing effective B+Tree indexes, avoiding common index pitfalls, optimizing count(*) operations, improving deep pagination, rewriting inefficient SQL patterns, and applying advanced table design techniques such as partitioning and sharding.
Background and Purpose
MySQL slow queries are the most common cause of database performance problems; queries taking more than one second can degrade user experience, and queries over ten seconds often trigger complaints. This article provides a practical, end‑to‑end guide for discovering, analyzing, locating, and optimizing slow queries, aimed at DBAs and operations engineers.
Prerequisites and Environment
Assumes basic SQL knowledge, familiarity with MySQL/MariaDB commands, and experience maintaining databases. The examples target MySQL 8.0.36 (Community Edition) and MariaDB 10.11.x, and are also compatible with Percona Server 8.0.
1. Slow‑Query Log Configuration and Enabling
Check the current configuration:
# Check if the slow‑query log is enabled
mysql -e "SHOW VARIABLES LIKE 'slow_query_log%';"
mysql -e "SHOW VARIABLES LIKE 'long_query_time%';"
mysql -e "SHOW VARIABLES LIKE 'log_output%';"
Temporarily enable the log:
# Enable temporarily
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/lib/mysql/mysql-slow.log';
SET GLOBAL long_query_time = 1; # log queries > 1 s
SET GLOBAL log_queries_not_using_indexes = 'ON';
Permanent configuration (my.cnf):
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/mysql-slow.log
long_query_time = 1
# log_queries_not_using_indexes = OFF # enable with caution in production
log_slow_admin_statements = ON
# log_output = 'TABLE' # switch to TABLE if needed
A convenient Bash script (enable_slow_query_log.sh) automates the above steps and verifies the settings.
Analysis tool: pt‑query‑digest from Percona Toolkit.
# Install
dnf install percona-toolkit -y
# Analyze the slow‑query log
pt-query-digest /var/lib/mysql/mysql-slow.log
# Show top 20 slow queries
pt-query-digest --limit 20 /var/lib/mysql/mysql-slow.log
# Time‑range analysis
pt-query-digest --since '2026-04-03 00:00:00' --until '2026-04-03 12:00:00' /var/lib/mysql/mysql-slow.log
# Filter by database
pt-query-digest --filter '$event->{db} && $event->{db} eq "mydb"' /var/lib/mysql/mysql-slow.log
# Save results
pt-query-digest /var/lib/mysql/mysql-slow.log > /tmp/query_analysis.txt
# Real‑time analysis from processlist
pt-query-digest --processlist h=localhost --interval 1 --run-time 60
2. Interpreting EXPLAIN Execution Plans
Basic usage:
EXPLAIN SELECT * FROM users WHERE id = 1;
EXPLAIN ANALYZE SELECT * FROM users WHERE id = 1; # MySQL 8.0.18+
EXPLAIN FORMAT=JSON SELECT * FROM users WHERE id = 1;
Key output fields:
id : sequence number of the SELECT within the query.
select_type : type of SELECT (SIMPLE, PRIMARY, SUBQUERY, etc.).
table : table accessed.
partitions : partitions involved.
type : join type (performance‑critical).
possible_keys : indexes that could be used.
key : index actually used.
key_len : length of the used key.
ref : columns compared with the index.
rows : estimated rows examined.
filtered : estimated percentage of the examined rows that remain after applying the table conditions.
Extra : additional information.
Typical type values (best to worst):
system – single‑row system table.
const – primary key or unique index, at most one row.
eq_ref – join using primary/unique key.
ref – non‑unique index lookup.
ref_or_null – ref with possible NULL values.
range – index range scan.
index – full index scan.
ALL – full table scan (slowest).
Typical Extra values:
Using index – covering index, no row lookup.
Using where – additional filtering after index.
Using temporary – creates a temporary table (performance hit).
Using filesort – external sort (performance hit).
Using index condition – index condition push‑down.
Using MRR – multi‑range read optimization.
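Read together, these fields tell a story. A sketch of how a healthy plan might look (the index name and numbers are illustrative assumptions, not real output):

```sql
-- Illustrative plan for an indexed lookup
EXPLAIN SELECT * FROM users WHERE name = 'John';
-- type: ref          -> non-unique index lookup, good
-- key: idx_name      -> the optimizer chose the expected index
-- rows: 42           -> small estimate, roughly proportional to matching rows
-- Extra: (empty)     -> no filesort, no temporary table
-- Warning signs would be: type: ALL (full scan), Extra: Using filesort / Using temporary
```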
2.5 Slow‑Query Analysis Script
#!/bin/bash
# script: analyze_slow_queries.sh
# Purpose: extract and analyze SQL from the slow‑query log
SLOW_LOG="/var/lib/mysql/mysql-slow.log"
REPORT_FILE="/tmp/slow_query_report_$(date +%Y%m%d).txt"
echo "=== MySQL Slow Query Analysis Report ===" > "$REPORT_FILE"
echo "Generated at: $(date)" >> "$REPORT_FILE"
if command -v pt-query-digest &>/dev/null; then
echo "[1] Top 10 slowest queries:" >> "$REPORT_FILE"
pt-query-digest --limit 10 "$SLOW_LOG" >> "$REPORT_FILE" 2>&1
echo "[2] Most frequent queries:" >> "$REPORT_FILE"
pt-query-digest --order-by Query_time:cnt --limit 10 "$SLOW_LOG" >> "$REPORT_FILE" 2>&1
echo "[3] Queries examining the most rows:" >> "$REPORT_FILE"
pt-query-digest --order-by Rows_examined:sum --limit 10 "$SLOW_LOG" >> "$REPORT_FILE" 2>&1
else
echo "pt-query-digest not installed, falling back to mysqldumpslow" >> "$REPORT_FILE"
mysqldumpslow -t 10 "$SLOW_LOG" >> "$REPORT_FILE" 2>&1
fi
cat "$REPORT_FILE"
3. B+Tree Index Fundamentals
MySQL uses B+Tree for most indexes because it is disk‑friendly: each node maps to a 16 KB InnoDB page, tree height is typically 3–4, and a lookup therefore needs only 3–4 page reads. Leaf nodes are linked for efficient range scans. Compared with a B‑Tree, a B+Tree stores data only in its leaf nodes; internal nodes hold just keys and pointers, so more keys fit per page and the tree stays shallower.
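The height claim can be sanity-checked with rough arithmetic; the entry and row sizes below are assumptions for illustration, not InnoDB constants:

```shell
# Rough B+Tree capacity estimate, assuming 16 KB pages,
# ~14 bytes per internal entry (8-byte BIGINT key + 6-byte page pointer),
# and ~16 rows per leaf page (~1 KB per row)
page_bytes=$((16 * 1024))
entry_bytes=14
fanout=$((page_bytes / entry_bytes))    # children per internal node
three_level=$((fanout * fanout * 16))   # rows reachable with a 3-level tree
echo "fanout=$fanout"
echo "three_level_rows=$three_level"
```

A third level already reaches tens of millions of rows, and a fourth multiplies that by another factor of the fan-out, into the tens of billions, which is why 3–4 page reads suffice for almost any table.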
Visualization example:
           [50 | 100 | 200]
          /    |      |     \
    [<50] [50-99] [100-199] [>=200]
      |      |       |        |
    data1  data2   data3    data4
4. Index Types and Creation
4.1 Primary Key Index
# Create a table with a primary key (clustered index)
CREATE TABLE orders (
order_id BIGINT PRIMARY KEY,
user_id BIGINT,
total_amount DECIMAL(10,2),
create_time DATETIME
);
# Add a primary key to an existing table
ALTER TABLE orders ADD PRIMARY KEY (order_id);
# Composite primary key
CREATE TABLE order_items (
order_id BIGINT,
item_id BIGINT,
quantity INT,
PRIMARY KEY (order_id, item_id)
);
Characteristics: one primary key per table, unique and NOT NULL, InnoDB uses it as the clustered index, and a BIGINT auto‑increment key is recommended for best performance.
4.2 Unique Index
# Unique index on email
CREATE UNIQUE INDEX idx_email ON users(email);
# Or define in table definition
CREATE TABLE users (
id INT PRIMARY KEY,
email VARCHAR(100) UNIQUE,
phone VARCHAR(20),
UNIQUE INDEX idx_phone (phone)
);
Differences from the primary key: a unique index allows NULLs (InnoDB permits multiple NULL entries, since NULL compares unequal to NULL), a table may have several unique indexes, and they are non‑clustered secondary indexes.
4.3 Normal Index
# Single‑column index
CREATE INDEX idx_name ON users(name);
# List all indexes
SHOW INDEX FROM users;
# Full syntax with comment
CREATE INDEX idx_status ON orders(status) USING BTREE COMMENT 'Order status index';
4.4 Prefix Index
Useful for VARCHAR/TEXT columns where only the first N characters are indexed.
# Index first 10 characters of email
CREATE INDEX idx_email_prefix ON users(email(10));
# Index first 20 characters of address
CREATE INDEX idx_address_prefix ON users(address(20));
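To choose the prefix length, compare the selectivity of candidate prefixes against the full column; a sketch against the users table (the lengths tested are illustrative):

```sql
-- Ratios close to the full-column selectivity mean the prefix is long enough
SELECT COUNT(DISTINCT email) / COUNT(*)           AS full_selectivity,
       COUNT(DISTINCT LEFT(email, 5)) / COUNT(*)  AS prefix_5,
       COUNT(DISTINCT LEFT(email, 10)) / COUNT(*) AS prefix_10
FROM users;
```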
# Caveats: choose the length carefully; a prefix index cannot act as a covering index or serve ORDER BY, and helps only with =, <, >, and LIKE 'prefix%'.
4.5 Composite Index
# Composite index on (user_id, status, create_time)
CREATE INDEX idx_user_status_time ON orders(user_id, status, create_time);
# Usage examples
SELECT * FROM orders WHERE user_id = 1; -- uses index
SELECT * FROM orders WHERE user_id = 1 AND status = 'paid'; -- uses index
SELECT * FROM orders WHERE status = 'paid'; -- may not use index (leftmost prefix rule)
SELECT * FROM orders WHERE user_id = 1 AND status > 'paid'; -- index used up to status, later columns not used
SELECT * FROM orders WHERE user_id = 1 AND create_time LIKE '2026-01%'; -- only user_id uses the index; LIKE on a DATETIME forces a string conversion
5. Typical Scenarios Where Indexes Fail
5.1 Functions and Arithmetic
# Bad: applying YEAR() function
SELECT * FROM orders WHERE YEAR(create_time) = 2026;
# Good: keep column independent
SELECT * FROM orders WHERE create_time >= '2026-01-01' AND create_time < '2027-01-01';
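If such a predicate cannot be rewritten, MySQL 8.0.13+ also supports functional indexes, which index the expression itself (the index name below is illustrative; note the extra parentheses required around the expression):

```sql
-- Functional index on the expression (MySQL 8.0.13+)
CREATE INDEX idx_order_year ON orders((YEAR(create_time)));
-- The original predicate can now use the index
SELECT * FROM orders WHERE YEAR(create_time) = 2026;
```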
# Bad: arithmetic on indexed column
SELECT * FROM users WHERE age + 1 = 30;
# Good
SELECT * FROM users WHERE age = 29;
5.2 Implicit Type Conversion
# Bad: numeric literal compared to VARCHAR column
SELECT * FROM test WHERE phone = 13800138000; -- index not used
# Good
SELECT * FROM test WHERE phone = '13800138000';
# Risky: string literal compared to an INT column
SELECT * FROM test2 WHERE id = '1abc'; -- the constant is cast to 1, so the index is used but the row with id=1 matches silently
5.3 LIKE Patterns
# Bad: leading wildcard
SELECT * FROM orders WHERE order_name LIKE '%test%';
# Good: trailing wildcard
SELECT * FROM orders WHERE order_name LIKE 'test%';
# Optimization: full‑text index
ALTER TABLE orders ADD FULLTEXT ft_order_name (order_name);
SELECT * FROM orders WHERE MATCH(order_name) AGAINST('test');
5.4 OR Conditions
# Bad: OR without indexes
SELECT * FROM users WHERE name='John' OR email='[email protected]';
# Good: split into UNION ALL with indexes on each side
SELECT * FROM users WHERE name='John'
UNION ALL
SELECT * FROM users WHERE email='[email protected]' AND name <> 'John';
# Or use IN when appropriate
SELECT * FROM users WHERE name IN ('John','Mary','Tom');
5.5 NOT Operators
# Bad: NOT IN or NOT EXISTS
SELECT * FROM orders WHERE status NOT IN ('paid','shipped');
SELECT * FROM orders WHERE status != 'paid';
# Good: use positive IN
SELECT * FROM orders WHERE status IN ('pending','cancelled');
5.6 Index‑Usage Checking Script
#!/bin/bash
# script: check_index_usage.sh
# Purpose: list indexes that have never been used (requires performance_schema)
mysql -e "
SELECT OBJECT_SCHEMA AS database_name,
       OBJECT_NAME   AS table_name,
       INDEX_NAME    AS index_name
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE INDEX_NAME IS NOT NULL
  AND INDEX_NAME <> 'PRIMARY'
  AND COUNT_STAR = 0
  AND OBJECT_SCHEMA = 'your_database'
ORDER BY OBJECT_NAME, INDEX_NAME;
"
# Or use pt-index-usage on the slow‑query log
# pt-index-usage /var/lib/mysql/mysql-slow.log --user=root --password=xxx
echo "Check EXPLAIN for suspicious queries"
6. Deep Dive into count(*) Optimization
6.1 count(*) vs count(1) vs count(col)
# No performance difference for count(*) , count(1) , count(primary_key)
SELECT COUNT(*) FROM orders;
SELECT COUNT(1) FROM orders;
SELECT COUNT(id) FROM orders; # id is NOT NULL primary key
# Difference appears when counting a nullable column
SELECT COUNT(col) FROM orders; # skips NULL rows
6.2 Engine‑Specific Behavior
In InnoDB, an unfiltered count(*) must scan an entire index. The optimizer prefers the smallest available index, because the clustered (primary key) index stores the full row in each leaf page and therefore spans far more pages than a compact secondary index.
# Unfiltered count(*): InnoDB scans the smallest index available
SELECT COUNT(*) FROM orders; # full index scan
SELECT COUNT(*) FROM orders WHERE status='paid'; # uses the status index if one exists
6.3 Multi‑Count Optimization
# Inefficient: three separate scans
SELECT COUNT(*) FROM orders WHERE status='paid';
SELECT COUNT(*) FROM orders WHERE status='pending';
SELECT COUNT(*) FROM orders WHERE status='cancelled';
# Efficient: single scan with conditional aggregation
SELECT SUM(status='paid') AS paid_count,
SUM(status='pending') AS pending_count,
SUM(status='cancelled') AS cancelled_count,
COUNT(*) AS total_count
FROM orders;
# Or GROUP BY
SELECT status, COUNT(*) AS cnt FROM orders GROUP BY status;
6.4 Large‑Table count(*) Tricks
# Approximate count using INFORMATION_SCHEMA
SELECT TABLE_ROWS FROM information_schema.TABLES WHERE TABLE_NAME='orders';
# Create a daily counter table for near‑real‑time counts
CREATE TABLE order_stats (
stat_date DATE PRIMARY KEY,
total_orders BIGINT DEFAULT 0,
paid_orders BIGINT DEFAULT 0,
pending_orders BIGINT DEFAULT 0
);
# Requires the event scheduler: SET GLOBAL event_scheduler = ON;
DELIMITER $$
CREATE EVENT e_update_order_stats
ON SCHEDULE EVERY 1 HOUR
DO BEGIN
INSERT INTO order_stats (stat_date,total_orders,paid_orders,pending_orders)
SELECT CURRENT_DATE,
COUNT(*),
SUM(status='paid'),
SUM(status='pending')
FROM orders
ON DUPLICATE KEY UPDATE
total_orders = VALUES(total_orders),
paid_orders = VALUES(paid_orders),
pending_orders= VALUES(pending_orders);
END$$
DELIMITER ;
7. Pagination Optimization – The Deep‑Pagination Problem
7.1 Problem Overview
Using LIMIT offset, rows forces MySQL to read and discard all rows before the offset, which becomes extremely slow for large offsets.
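The fix, in every solution below, is to seek directly to the start of the page instead of scanning past the offset. One subtlety up front: a seek cursor needs a unique ordering, so when paginating by a non‑unique column such as create_time, include the primary key as a tie‑breaker. A sketch, assuming an index on (create_time, id); the placeholder parameters are illustrative:

```sql
-- Keyset pagination with a tie-breaker; (:last_time, :last_id) are taken
-- from the final row of the previous page
SELECT * FROM orders
WHERE (create_time, id) < (:last_time, :last_id)
ORDER BY create_time DESC, id DESC
LIMIT 20;
```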
7.2 Solution 1 – Primary‑Key Cursor
# First page
SELECT * FROM orders ORDER BY id LIMIT 20;
# Suppose last_id = 1000
# Next page
SELECT * FROM orders WHERE id > 1000 ORDER BY id LIMIT 20;
7.3 Solution 2 – Delayed Join
# First locate only the IDs with the heavy filter (a narrow index scan),
# then join back to fetch the full rows for just one page
SELECT o.* FROM orders o
JOIN (
    SELECT id FROM orders
    WHERE status='paid'
    ORDER BY create_time DESC
    LIMIT 100000,20
) t ON o.id = t.id;
7.4 Solution 3 – Range Query on Auto‑Increment ID
# User jumps to page 500 (20 rows per page) and knows last seen id = 10000
SELECT * FROM orders WHERE id > 10000 ORDER BY id LIMIT 20;
7.5 Solution 4 – Cache Total Count
# Show only previous/next links without exact total
SELECT * FROM orders WHERE id < 10000 ORDER BY id DESC LIMIT 20;
SELECT COUNT(*) FROM orders WHERE id < 10000; # check if more rows exist
7.6 Pagination Performance Test Script
#!/bin/bash
# script: test_pagination_performance.sh
mysql -e "
SET profiling = 1;
SELECT SQL_NO_CACHE * FROM orders ORDER BY id LIMIT 20;
SELECT SQL_NO_CACHE * FROM orders ORDER BY id LIMIT 10000,20;
SELECT SQL_NO_CACHE * FROM orders ORDER BY id LIMIT 50000,20;
SELECT SQL_NO_CACHE * FROM orders ORDER BY id LIMIT 100000,20;
SHOW PROFILES;
"
8. Slow‑Query Case Studies
8.1 Case 1 – Order Statistics Query
Problem SQL aggregates orders by date and user, but GROUP BY and ORDER BY fields differ and user_name is not covered by an index.
SELECT DATE(create_time) AS order_date,
COUNT(*) AS order_count,
SUM(total_amount) AS total_amount,
user_name
FROM orders
WHERE create_time >= '2026-01-01'
GROUP BY DATE(create_time), user_name
ORDER BY order_date DESC;
Optimization: add an index on create_time and remove user_name from the GROUP BY if it is not needed, or include it in the index.
ALTER TABLE orders ADD INDEX idx_create_time (create_time);
SELECT DATE(create_time) AS order_date,
COUNT(*) AS order_count,
SUM(total_amount) AS total_amount
FROM orders
WHERE create_time >= '2026-01-01'
GROUP BY DATE(create_time)
ORDER BY order_date DESC;
8.2 Case 2 – User Behavior Analysis
The original query left‑joins users to both orders and events, producing a row for every order × event combination per user; the COUNT(DISTINCT) aggregates then have to deduplicate this fan‑out, which is expensive.
SELECT u.id, u.name,
COUNT(DISTINCT o.id) AS order_count,
COUNT(DISTINCT e.id) AS event_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
LEFT JOIN events e ON u.id = e.user_id
WHERE u.register_time >= '2026-01-01'
GROUP BY u.id, u.name;
Optimization: create indexes on users.register_time, orders.user_id, and events.user_id, then rewrite with subqueries that aggregate each table separately before joining.
ALTER TABLE users ADD INDEX idx_register_time (register_time);
ALTER TABLE orders ADD INDEX idx_user_id (user_id);
ALTER TABLE events ADD INDEX idx_user_id (user_id);
SELECT u.id, u.name,
COALESCE(o.order_count,0) AS order_count,
COALESCE(e.event_count,0) AS event_count
FROM users u
LEFT JOIN (
SELECT user_id, COUNT(*) AS order_count FROM orders GROUP BY user_id
) o ON u.id = o.user_id
LEFT JOIN (
SELECT user_id, COUNT(*) AS event_count FROM events GROUP BY user_id
) e ON u.id = e.user_id
WHERE u.register_time >= '2026-01-01';
8.3 Case 3 – Pagination Export
Exporting a large table page by page with LIMIT 1000000,20 is inefficient: every page re‑reads and discards all rows before the offset.
Solution 1: use primary‑key range as shown in Section 7.
Solution 2: create a temporary table of IDs and join back.
CREATE TEMPORARY TABLE temp_export_ids (id BIGINT PRIMARY KEY);
INSERT INTO temp_export_ids
SELECT id FROM orders WHERE status='completed' ORDER BY create_time DESC LIMIT 100000;
SELECT o.* FROM orders o JOIN temp_export_ids t ON o.id = t.id ORDER BY o.create_time DESC;
8.4 Slow‑Query Analysis Report Script
#!/bin/bash
# script: slow_query_report.sh
DB_NAME="orders_db"
REPORT_FILE="/tmp/mysql_slow_report_$(date +%Y%m%d).txt"
echo "=== MySQL Slow Query Analysis Report ===" > "$REPORT_FILE"
echo "Database: $DB_NAME" >> "$REPORT_FILE"
echo "Generated at: $(date)" >> "$REPORT_FILE"
# Note: these queries read mysql.slow_log, which is populated only when log_output includes 'TABLE'
# 1. Summary of slow queries in the last 24h
mysql -D "$DB_NAME" -e "
SELECT COUNT(*) AS total_slow_queries,
       AVG(TIME_TO_SEC(query_time)) AS avg_query_seconds,
       MAX(query_time) AS max_query_time,
       COUNT(DISTINCT db) AS affected_databases
FROM mysql.slow_log
WHERE start_time >= DATE_SUB(NOW(),INTERVAL 24 HOUR);
" >> "$REPORT_FILE" 2>&1
# 2. Top 10 slowest queries (last 7 days); the column is sql_text, not query_sql
mysql -D "$DB_NAME" -e "
SELECT MAX(query_time) AS worst_time,
       MAX(rows_examined) AS rows_examined,
       LEFT(sql_text,100) AS sql_preview,
       MAX(start_time) AS last_seen
FROM mysql.slow_log
WHERE start_time >= DATE_SUB(NOW(),INTERVAL 7 DAY)
GROUP BY sql_text
ORDER BY worst_time DESC
LIMIT 10;
" >> "$REPORT_FILE" 2>&1
# 3. Queries examining the most rows (often a sign of missing indexes)
mysql -D "$DB_NAME" -e "
SELECT LEFT(sql_text,100) AS sql_preview,
       COUNT(*) AS exec_count,
       AVG(TIME_TO_SEC(query_time)) AS avg_seconds,
       AVG(rows_examined) AS avg_rows_examined
FROM mysql.slow_log
WHERE start_time >= DATE_SUB(NOW(),INTERVAL 7 DAY)
GROUP BY LEFT(sql_text,100)
ORDER BY avg_rows_examined DESC
LIMIT 10;
" >> "$REPORT_FILE" 2>&1
cat "$REPORT_FILE"
9. SQL Rewrite Techniques
9.1 Replace IN with EXISTS or JOIN
# Inefficient IN subquery
SELECT * FROM users WHERE id IN (SELECT user_id FROM orders WHERE amount > 1000);
# Equivalent JOIN (clearer and often faster)
SELECT DISTINCT u.* FROM users u INNER JOIN orders o ON u.id = o.user_id WHERE o.amount > 1000;
# EXISTS version
SELECT * FROM users u WHERE EXISTS (SELECT 1 FROM orders o WHERE o.user_id = u.id AND o.amount > 1000);
9.2 Replace OR with UNION
# Bad OR
SELECT * FROM products WHERE category='electronics' OR brand='Apple';
# Better UNION (ensures each side can use its own index)
SELECT * FROM products WHERE category='electronics'
UNION
SELECT * FROM products WHERE brand='Apple' AND category <> 'electronics';
# UNION ALL if duplicates are impossible
9.3 Optimize LIKE
# Bad leading wildcard
SELECT * FROM products WHERE name LIKE '%iphone%';
# Full‑text index solution
ALTER TABLE products ADD FULLTEXT ft_name (name);
SELECT * FROM products WHERE MATCH(name) AGAINST('+iphone' IN BOOLEAN MODE);
# Virtual column for prefix filtering
ALTER TABLE products ADD COLUMN name_first_char CHAR(1) GENERATED ALWAYS AS (LEFT(name,1)) STORED;
CREATE INDEX idx_name_first_char ON products(name_first_char);
SELECT * FROM products WHERE name_first_char='i' AND name LIKE 'i%iphone%';
9.4 Optimize COUNT(DISTINCT)
# Multiple COUNT(DISTINCT) in one query forces many passes
SELECT COUNT(DISTINCT user_id) AS user_cnt,
COUNT(DISTINCT product_id) AS prod_cnt,
COUNT(DISTINCT category_id) AS cat_cnt
FROM orders;
# Split into independent subqueries (single pass each) or use conditional aggregation if appropriate
SELECT (SELECT COUNT(DISTINCT user_id) FROM orders) AS user_cnt,
(SELECT COUNT(DISTINCT product_id) FROM orders) AS prod_cnt,
(SELECT COUNT(DISTINCT category_id) FROM orders) AS cat_cnt;
10. Table Design Optimizations
10.1 Choose Appropriate Data Types
Prefer the smallest integer type that fits the range (TINYINT, SMALLINT, INT, BIGINT).
Use DECIMAL for exact monetary values; avoid FLOAT/DOUBLE for financial data.
CHAR for fixed‑length strings (e.g., phone numbers), VARCHAR for variable length.
DATE for day precision, DATETIME for second precision, TIMESTAMP for automatic creation/update timestamps.
10.2 Normalization vs. Denormalization
Third Normal Form (3NF) separates entities into distinct tables, reducing redundancy. Denormalization stores redundant data (e.g., user_name in the orders table) to avoid joins in read‑heavy scenarios, at the cost of extra storage and write complexity.
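A minimal sketch of the denormalization trade-off, assuming a users(id, name) table and the orders table from earlier examples:

```sql
-- Copy user_name into orders so order listings avoid the join
ALTER TABLE orders ADD COLUMN user_name VARCHAR(100);
UPDATE orders o JOIN users u ON o.user_id = u.id SET o.user_name = u.name;
-- Cost: renaming a user must now also update orders (application logic or a trigger)
```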
10.3 Partitioned Tables
Partitioning by date can limit scans to relevant partitions.
CREATE TABLE orders (
order_id BIGINT,
user_id INT,
total_amount DECIMAL(10,2),
create_time DATETIME,
PRIMARY KEY (order_id, create_time)
) PARTITION BY RANGE (YEAR(create_time)) (
PARTITION p2024 VALUES LESS THAN (2025),
PARTITION p2025 VALUES LESS THAN (2026),
PARTITION p2026 VALUES LESS THAN (2027),
PARTITION p_future VALUES LESS THAN MAXVALUE
);
-- Query only the needed partition
SELECT * FROM orders WHERE create_time >= '2026-01-01' AND create_time < '2026-02-01';
10.4 Sharding (Horizontal Scaling)
Logical sharding based on a hash of user_id (e.g., user_id % 4) distributes data across multiple physical tables or databases. Middleware such as ShardingSphere or MyCAT implements the routing. Global tables (e.g., categories) are replicated to every shard.
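The routing itself is simple arithmetic; a minimal sketch of user_id % 4 routing (the table names and modulus are assumptions for illustration):

```shell
# Hypothetical hash routing: user_id % 4 selects the physical table
user_id=10042
shard=$((user_id % 4))
table="orders_${shard}"
echo "$table"   # -> orders_2
```

All queries for a user must include user_id so the router can resolve a single shard; queries without the shard key have to fan out to every shard and merge the results.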
11. Summary – Index Usage Cheat Sheet
11.1 Index Design Principles
Create indexes on columns used in WHERE, ORDER BY, or GROUP BY.
Prefer high‑cardinality columns.
Follow the left‑most prefix rule for composite indexes.
Avoid functions or arithmetic on indexed columns.
Limit the total number of indexes to control storage and write overhead.
11.2 Slow‑Query Optimization Checklist
Enable slow‑query log and set an appropriate long_query_time.
Identify the slowest queries (pt‑query‑digest or mysqldumpslow).
Run EXPLAIN and examine type (avoid ALL) and Extra (avoid filesort/temporary).
Ensure the query uses an index; add or modify indexes as needed.
Rewrite inefficient patterns (IN, OR, LIKE, NOT).
Re‑run the query and compare execution time.
11.3 Common Commands Quick Reference
Show slow‑query settings: SHOW VARIABLES LIKE 'slow_query%';
Enable log: SET GLOBAL slow_query_log = ON;
Set threshold: SET GLOBAL long_query_time = 1;
Explain plan: EXPLAIN SELECT ...;
Show indexes: SHOW INDEX FROM table_name;
Create index: CREATE INDEX idx_name ON table(col);
Drop index: DROP INDEX idx_name ON table;
Show table status:
SHOW TABLE STATUS LIKE 'table_name';
11.4 EXPLAIN Quick‑Read Table
type hierarchy (best → worst): system → const → eq_ref → ref → range → index → ALL.
Extra flags to watch: Using filesort, Using temporary (both indicate potential problems); Using index (good – covering index).
References
Versions: MySQL 8.0.36 (community), Percona Server 8.0.36, MariaDB 10.11.x, InnoDB storage engine, Rocky Linux 9.4.
Tools: Percona Toolkit (pt‑query‑digest, pt‑index‑usage), MySQL Workbench, performance_schema, sys schema.
Documentation: MySQL 8.0 Reference Manual (Optimization), "High Performance MySQL" (3rd ed.), MySQL Internals Manual.
MaGe Linux Operations