
How to Effectively Monitor and Optimize Slow SQL in Real Projects

The article explains a systematic, interview-ready approach to handling slow SQL: collect queries via the Druid connection pool, the MySQL slow-query log, or cloud exporters; prioritize by total execution time; analyze EXPLAIN output (type, key, rows, filtered, Extra); and apply indexing, FORCE INDEX, join-order, or off-loading techniques for optimization.

Senior Tony

In recent technical interviews, candidates often enumerate various SQL‑optimization cases, but interviewers actually expect a systematic approach: monitor slow SQL in the project, analyze the execution plan, and then optimize based on concrete metrics.

How to collect slow SQL

Three approaches are commonly used in practice.

Using Druid connection pool – add druid-spring-boot-starter dependency, enable stat and wall filters in application.yml, and expose the Druid monitoring page (e.g., /druid/sql.html). The monitor records queries exceeding the slow-sql-millis threshold and provides a visual panel with execution count, max latency, and ranking.
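As a sketch, the Druid setup described above might look like the following application.yml fragment (property names follow druid-spring-boot-starter; the 5000 ms threshold is an assumed value, not one from the article):

```yaml
spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    druid:
      # Enable the SQL statistics (stat) and firewall (wall) filters
      filters: stat,wall
      filter:
        stat:
          enabled: true
          log-slow-sql: true        # log queries slower than the threshold
          slow-sql-millis: 5000     # slow-SQL threshold in milliseconds (assumed)
      stat-view-servlet:
        enabled: true               # expose the monitoring UI
        url-pattern: /druid/*
```

With this in place, the slow-SQL ranking is visible under the /druid/sql.html page mentioned above.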

MySQL native slow‑query log – set slow_query_log=ON, slow_query_log_file=/var/log/mysql/slow.log, and long_query_time=1 (seconds) in my.cnf. After restart, MySQL logs each slow query; the mysqldumpslow tool can aggregate and analyze the log.
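The corresponding my.cnf fragment for that configuration:

```ini
# my.cnf — [mysqld] section
[mysqld]
slow_query_log      = ON
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1      # seconds; queries slower than this are logged
```

After a restart, something like `mysqldumpslow -s t -t 10 /var/log/mysql/slow.log` aggregates the log and prints the ten queries with the highest cumulative time (`-s t` sorts by query time, `-t 10` limits the output).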

Cloud‑provider exporters – most public clouds (Alibaba Cloud, Tencent Cloud, etc.) offer built‑in exporters that collect slow‑SQL metrics and expose them to monitoring platforms such as Prometheus.

Which slow SQL to prioritize

Instead of picking the query with the longest single execution time, calculate total_time = execution_time × execution_count. The query with the highest total_time has the greatest impact on the database.
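A minimal sketch of this ranking in Python, using made-up sample numbers (in production the same figures can be pulled from the slow log or from MySQL's statement-digest statistics):

```python
# Rank slow queries by total impact (avg_ms × count), not by single longest run.
# The sample data below is illustrative, not from a real log.
slow_queries = [
    {"sql": "SELECT ... FROM orders ...",  "avg_ms": 3200, "count": 12},
    {"sql": "SELECT ... FROM users ...",   "avg_ms": 180,  "count": 9500},
    {"sql": "SELECT ... FROM reports ...", "avg_ms": 9500, "count": 3},
]

for q in slow_queries:
    q["total_ms"] = q["avg_ms"] * q["count"]

# Sorting by total time, the users query dominates despite its modest latency.
ranked = sorted(slow_queries, key=lambda q: q["total_ms"], reverse=True)
for q in ranked:
    print(f'{q["total_ms"]:>10} ms  {q["sql"]}')
```

Note how the reports query has the worst single execution (9.5 s) but the lowest total impact, while the 180 ms users query ranks first.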

Analyzing the execution plan

Run EXPLAIN on the slow query and pay attention to the following columns:

type – the access method, ranked from best to worst:

system > const > eq_ref > ref > range > index > ALL

Queries whose type is index or ALL usually need optimization.

key / possible_keys – the index actually used versus the indexes that could be used.

rows – estimated number of rows to scan.

filtered – the estimated percentage of examined rows that survive the remaining conditions; a higher value means the chosen access path is already doing most of the filtering work.

Extra – presence of Using temporary or Using filesort signals that optimization is required.
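Putting the columns together, here is a sketch of how a plan reads; the table, index, and values are hypothetical, not from the article:

```sql
-- Hypothetical schema: orders with a composite index idx_user_status (user_id, status).
EXPLAIN
SELECT order_no, amount
FROM   orders
WHERE  user_id = 42
  AND  status  = 'PAID'
ORDER  BY created_at DESC;

-- A plausible plan for this query:
--   type:  ref               (index lookup — far better than ALL)
--   key:   idx_user_status   (the composite index was actually chosen)
--   rows:  120               (small estimated scan)
--   Extra: Using filesort    (ORDER BY is not covered by the index —
--                             a candidate for extending it with created_at)
```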

Optimization techniques

Most slow‑SQL problems (≈70 %) can be solved by adding appropriate indexes, especially composite indexes that follow the left‑most prefix rule. When choosing columns for a composite index, balance selectivity and query frequency; sometimes the less selective column should lead if it appears in far more queries.
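A sketch of the left-most prefix rule, with assumed table and column names:

```sql
-- A composite index serves predicates on its left-most prefixes:
-- (city), (city, age), and (city, age, name) — but not age alone.
CREATE INDEX idx_city_age_name ON users (city, age, name);

-- Can use the index (leading column is present):
SELECT * FROM users WHERE city = 'Beijing' AND age = 30;

-- Cannot use the index for lookup (skips the leading column):
SELECT * FROM users WHERE age = 30;
```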

In write‑heavy workloads, avoid creating too many composite indexes because each index adds write overhead. Use FORCE INDEX to compel the optimizer to use a specific index when it chooses a sub‑optimal one, and remember that the index must appear in possible_keys to be effective.
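A FORCE INDEX hint might look like this (table and index names are assumptions for illustration):

```sql
-- Force the optimizer toward idx_created_at; the hint only helps
-- when that index appears in possible_keys for this query.
SELECT id, amount
FROM   orders FORCE INDEX (idx_created_at)
WHERE  created_at >= '2024-01-01'
ORDER  BY created_at;
```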

For complex cases, consider query‑level hints such as STRAIGHT_JOIN to set the join order, or choose between IN and EXISTS based on the size of the inner versus outer result sets.
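For example, with a hypothetical schema, STRAIGHT_JOIN pins the join order to the order written in the query:

```sql
-- Drive the join from the small promotions table instead of letting
-- the optimizer pick the order. Schema is hypothetical.
SELECT o.id, p.name
FROM   promotions p
STRAIGHT_JOIN orders o ON o.promo_id = p.id
WHERE  p.active = 1;

-- Rule of thumb for the IN vs EXISTS choice mentioned above:
-- IN suits a small inner result set materialized once;
-- EXISTS suits a large inner table probed once per outer row.
```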

If MySQL cannot achieve the required performance, offload analytics to specialized stores such as Elasticsearch, ClickHouse, Doris, or Kylin.

[Figure: Druid monitoring UI]
[Figure: Cloud exporter diagram]
[Figure: EXPLAIN output example]
Tags: Performance, Optimization, SQL, MySQL, Indexes
Written by Senior Tony

Former senior tech manager at Meituan, ex‑tech director at New Oriental, with experience at JD.com and Qunar; specializes in Java interview coaching and regularly shares hardcore technical content. Runs a video channel of the same name.
