Boost MySQL Performance: Proven Tuning, Indexing, and Scaling Strategies
This guide presents practical MySQL optimization techniques—including SQL and index refinement, InnoDB and connection parameter tuning, cache layer integration, and architectural scaling with read‑write splitting and sharding—to dramatically increase query throughput and reduce latency.
Optimize SQL and Indexes (Make Each Query Light)
Reducing the data scanned, sorted, spilled to temporary tables, and row-locked per request multiplies the QPS a single server can sustain. Enable slow_query_log and set long_query_time to 0.5–1 s, then use mysqldumpslow or pt-query-digest to identify the most expensive and most frequent statements for focused optimization.
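To start the analysis, the slow query log can be enabled at runtime (the file path below is an assumed default; mirror the settings in my.cnf to survive restarts):

```sql
-- Enable the slow query log without a restart.
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0.5;  -- log statements slower than 0.5 s
-- Optional but noisy: also log index-less queries.
SET GLOBAL log_queries_not_using_indexes = OFF;
```

Then aggregate the log offline, e.g. `mysqldumpslow -s t /var/lib/mysql/slow.log` to sort by total time, or `pt-query-digest slow.log` to group statements by fingerprint and rank them by cumulative cost.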
Index design: create composite indexes covering common WHERE / JOIN / ORDER BY / GROUP BY patterns, aiming for covering indexes. Avoid low‑selectivity single‑column indexes (e.g., status flags) and keep the number of high‑value indexes per table under five.
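As a sketch of the composite/covering pattern, assume a hypothetical orders table queried by user and status and sorted by time. Ordering the index columns as (equality filters, then sort column) lets one index serve WHERE and ORDER BY, and appending the selected column makes it covering:

```sql
-- Hypothetical table; column names are illustrative.
ALTER TABLE orders
  ADD INDEX idx_user_status_created (user_id, status, created_at, amount);

-- Served entirely from the index (EXPLAIN shows "Using index"):
SELECT user_id, status, created_at, amount
FROM orders
WHERE user_id = 42 AND status = 1
ORDER BY created_at DESC
LIMIT 20;
```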
Rewrite queries: avoid SELECT *, select only needed columns, and break complex multi‑table joins into simpler queries with application‑level aggregation or intentional redundancy to shift CPU load from the DB.
Avoid leading-wildcard fuzzy searches (LIKE '%xxx') and sorts on unindexed columns, which show up in EXPLAIN as Using temporary or Using filesort.
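A leading wildcard defeats the B-tree entirely; a prefix match does not, and for true substring search a FULLTEXT index sidesteps the scan. A sketch against a hypothetical products table:

```sql
-- Leading-wildcard LIKE cannot use a B-tree index: full scan.
SELECT id FROM products WHERE name LIKE '%phone%';

-- A prefix match on an indexed column seeks directly into the index.
SELECT id FROM products WHERE name LIKE 'phone%';

-- For genuine keyword search, a FULLTEXT index (InnoDB, MySQL 5.6+):
ALTER TABLE products ADD FULLTEXT INDEX ft_name (name);
SELECT id FROM products WHERE MATCH(name) AGAINST('phone');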
Control large transactions and locks: batch write operations (bulk UPDATE / INSERT) to reduce commit frequency and lock contention, and offload lengthy logic to asynchronous tasks or queues to keep online transactions short.
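Batching replaces many commits (and their lock and fsync cycles) with one. A minimal sketch with a hypothetical events table:

```sql
-- Instead of 1,000 single-row INSERTs with autocommit (1,000 fsyncs),
-- batch a few hundred rows per statement inside one transaction:
START TRANSACTION;
INSERT INTO events (user_id, type, created_at) VALUES
  (1, 'click', NOW()),
  (2, 'view',  NOW()),
  (3, 'click', NOW());
COMMIT;
```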
Tune InnoDB and Connection Parameters (Approach Single‑Machine Limits)
On a typical 8‑core, 32 GB + SSD server, allocate 60%–75% of physical memory to innodb_buffer_pool_size (up to 70%–80% for dedicated DB servers) to keep hot data in memory.
Set innodb_buffer_pool_instances to 8–16 to reduce mutex contention.
Increase innodb_log_file_size to 1–2 GB per file to lower checkpoint frequency and improve write throughput.
Balance durability and latency with innodb_flush_log_at_trx_commit: the default of 1 flushes the redo log to disk on every commit, while 2 writes at commit but flushes only about once per second—common for high‑concurrency writes, at the cost of losing up to a second of commits if the OS crashes.
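Pulling the parameters above together, a my.cnf sketch for the 8‑core / 32 GB host described earlier (values follow the guidelines above; benchmark against your own workload before adopting):

```ini
[mysqld]
innodb_buffer_pool_size        = 20G   # ~60-75% of 32 GB RAM
innodb_buffer_pool_instances   = 8     # reduce buffer-pool mutex contention
innodb_log_file_size           = 2G    # fewer, larger checkpoints
innodb_flush_log_at_trx_commit = 2     # flush ~once/sec; may lose <=1 s of
                                       # commits on an OS crash
```

Note that on MySQL 8.0.30 and later, innodb_redo_log_capacity supersedes innodb_log_file_size for sizing the redo log.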
Introduce a Cache Layer for Peak Shaving
Most workloads can reach tens of thousands of QPS simply by adding a Redis or Memcached layer that keeps hot data out of the database.
Read cache: store frequently accessed items such as product details, configurations, and leaderboards in Redis/Memcached; on a cache miss, query the DB and back‑fill the cache. Prioritize caching the highest‑QPS, cache‑able endpoints, often reducing DB QPS to one‑third or less.
Write‑through/refresh strategies: after a DB write, synchronously or asynchronously invalidate or update the cache to maintain eventual consistency.
Guard against cache stampede, penetration, and avalanche by using mutex locks for hot keys, pre‑warming, sensible expiration, and graceful degradation.
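The read-cache and invalidation flow above is the classic cache-aside pattern. A minimal Python sketch, where a dict stands in for Redis and query_db for the real database call (all names and the TTL are illustrative assumptions):

```python
import time

CACHE: dict = {}    # key -> (value, expires_at); stand-in for Redis
TTL_SECONDS = 300

def query_db(product_id: int) -> dict:
    # Placeholder for: SELECT ... FROM products WHERE id = %s
    return {"id": product_id, "name": f"product-{product_id}"}

def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    entry = CACHE.get(key)
    if entry and entry[1] > time.time():   # cache hit, still fresh
        return entry[0]
    value = query_db(product_id)           # miss: fall through to the DB...
    CACHE[key] = (value, time.time() + TTL_SECONDS)  # ...and back-fill
    return value

def invalidate_product(product_id: int) -> None:
    # Call after a DB write to keep the cache eventually consistent.
    CACHE.pop(f"product:{product_id}", None)
```

In production the dict would be a Redis client, the back-fill for hot keys would sit behind a mutex to prevent stampedes, and TTLs would be jittered to avoid synchronized expiry (avalanche).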
Architectural Scaling: Read‑Write Splitting, Sharding, and Horizontal Expansion
When a single node plus cache and SQL tuning still hit bottlenecks, employ architectural techniques to distribute load.
Read‑write splitting: designate a primary instance for writes and one or more replicas for reads, using ProxySQL, MySQL Router, or custom middleware for routing. In read‑heavy scenarios, adding replicas spreads read QPS nearly linearly across the fleet; monitor replica lag and choose the appropriate consistency per query (read‑from‑primary when freshness matters, read‑from‑replica otherwise).
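The routing decision itself is simple; real deployments usually delegate it to ProxySQL or MySQL Router, but a Python sketch makes the rule explicit (DSN strings are made‑up placeholders):

```python
import random

PRIMARY = "mysql://primary:3306"
REPLICAS = ["mysql://replica-1:3306", "mysql://replica-2:3306"]

def route(sql: str, *, require_fresh: bool = False) -> str:
    """Send writes, and reads that must see the latest commit, to the
    primary; spread all other reads across the replicas."""
    is_read = sql.lstrip().lower().startswith(("select", "show"))
    if is_read and not require_fresh:
        return random.choice(REPLICAS)
    return PRIMARY
```

The require_fresh flag is where the consistency choice surfaces: a read that immediately follows the caller's own write should bypass lagging replicas.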
Sharding (horizontal partitioning): when tables grow to tens of millions of rows or beyond, split them by user ID or another business key (e.g., user_id % 16) and use middleware such as ShardingSphere for transparent routing. QPS scales roughly linearly with shard count—e.g., a cloud provider achieved 800 k QPS with 32 shards and can reach millions by adding more shards.
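The user_id % 16 scheme above reduces to a pure function; middleware like ShardingSphere computes the same mapping transparently. A sketch with illustrative database and table names:

```python
NUM_SHARDS = 16

def shard_for(user_id: int) -> str:
    """Map a user to its physical shard and table suffix."""
    n = user_id % NUM_SHARDS
    return f"db_{n}.orders_{n}"
```

One design caution: modulo routing makes adding shards expensive (most keys remap), so schemes that expect growth often route via consistent hashing or a range/lookup table instead.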
These combined techniques—SQL/index optimization, InnoDB tuning, caching, and scalable architecture—enable MySQL deployments to handle high QPS workloads with reduced latency and improved stability.
Architect Chen
Sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.