
MySQL Performance Optimization: Indexing, Query Tuning, and Best Practices

This article explains common MySQL performance problems such as missing indexes and lack of pagination, and provides practical guidance on data size limits, concurrency settings, query time targets, table design, index types, and a variety of SQL optimizations with concrete code examples.

Laravel Tech Community

The author reports frequent slow‑SQL alerts on an Alibaba Cloud MySQL instance, with the longest query taking up to five minutes, caused mainly by missing indexes and missing pagination.

① Maximum Data Volume

MySQL does not impose a hard limit on rows per table; the limit is bound by the operating system's file‑size constraints. The Alibaba Java Development Manual suggests splitting tables when rows exceed 5 million or size exceeds 2 GB, though the actual limit depends on hardware, configuration, and schema design.

Example of a fast pagination query on a table with hundreds of millions of rows:

select field_1,field_2 from table where id < #{prePageMinId} order by id desc limit 20

prePageMinId is the smallest ID of the previous page. As data grows, this pattern should be replaced by more robust pagination or sharding.

② Maximum Concurrency

Concurrency is bounded by max_connections (at most 16,384) and max_user_connections . Setting these too high can exhaust memory; too low under‑utilises the hardware. A rule of thumb is that peak usage should exceed 10% of the configured limit; a far lower ratio means the limit is over‑provisioned:

max_used_connections / max_connections × 100% = 3/100 × 100% ≈ 3%

Check current values:

show variables like '%max_connections%';
show variables like '%max_user_connections%';
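Note that max_used_connections, used in the ratio above, is a status counter rather than a configuration variable, so it is read with SHOW STATUS:

```sql
show global status like 'Max_used_connections';
```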

Adjust in my.cnf :

[mysqld]
max_connections = 100
max_user_connections = 20
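Both limits can also be raised at runtime without a restart; such a change lasts only until the server restarts unless it is also written to my.cnf:

```sql
set global max_connections = 100;
set global max_user_connections = 20;
```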

③ Query Time Target

Keep single‑query latency under 0.5 seconds, which corresponds to the user‑experience rule that a response should be within 3 seconds. Response time = UI rendering + network latency + application processing + DB query; 0.5 s reserves roughly one‑sixth for the DB.

④ Implementation Principles

MySQL degrades less gracefully under heavy load than many NoSQL systems; therefore, let the application do more work and the database do less. Upgrade hardware, archive data, or shard only when genuinely necessary.

Use indexes wisely; they consume disk and CPU.

Avoid database functions for formatting; handle it in the application.

Prefer application‑level data integrity over foreign‑key constraints.

For write‑heavy workloads, avoid unique indexes and enforce uniqueness in code.

Consider redundant fields or intermediate tables to trade space for speed.

Break large transactions into smaller ones.

Forecast growth of critical tables (e.g., orders) and optimise early.

Table Design

① Data Types

Select the smallest suitable type: tinyint , smallint , mediumint instead of int ; use char when length is fixed; avoid text if varchar suffices; use decimal or bigint for precise monetary values; prefer timestamp over datetime for space efficiency and automatic UTC handling.
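As a sketch, a hypothetical member table applying these type choices might look like:

```sql
create table `member` (
    id          bigint unsigned  not null auto_increment,
    status      tinyint unsigned not null default 0,      -- small flag, not int
    country     char(2)          not null default '',     -- fixed length: char
    nick_name   varchar(64)      not null default '',     -- varchar instead of text
    balance     decimal(12, 2)   not null default 0.00,   -- precise monetary value
    created_at  timestamp        not null default current_timestamp,
    primary key (id)
) engine = InnoDB;
```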

② Avoid NULL

NULL values still occupy space and complicate indexes. Replace NULL with meaningful defaults whenever possible.

③ Text Type Optimisation

Large TEXT columns increase table size quickly; store them in a separate table and join by the business key.
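A minimal sketch of this split, with hypothetical table names, keeps the hot table small and fetches the body only when needed:

```sql
create table `article` (
    id    bigint unsigned not null auto_increment,
    title varchar(200)    not null default '',
    primary key (id)
) engine = InnoDB;

create table `article_content` (
    article_id bigint unsigned not null,  -- business key back to article
    body       text            not null,
    primary key (article_id)
) engine = InnoDB;

-- Join only when the large column is actually required
select a.title, c.body
from article a
join article_content c on c.article_id = a.id
where a.id = 1;
```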

Index Optimisation

Index Types

Ordinary index – basic.

Composite index – multiple columns, speeds up multi‑column predicates.

Unique index – enforces uniqueness, allows NULL.

Composite unique index – uniqueness across a column set.

Primary key – unique, non‑NULL identifier.

Full‑text index – for large text search (MySQL 5.6+), though many prefer Elasticsearch.
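The index types above can be created as follows (table and column names are hypothetical):

```sql
alter table `member` add index idx_nick_name (nick_name);              -- ordinary
alter table `member` add index idx_city_status (city, status);         -- composite
alter table `member` add unique index uk_login_name (login_name);      -- unique
alter table `member` add unique index uk_user_type (user_id, type);    -- composite unique
alter table `article_content` add fulltext index ft_body (body);       -- full-text (5.6+)
```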

Optimisation Principles

Paginate queries; if more than roughly 30% of a table's rows would be scanned, the optimizer may ignore the index and fall back to a full scan.

Limit indexes per table to ≤5 and fields per index to ≤5.

Use prefix indexes for strings (5‑8 characters).

Avoid low‑cardinality indexes (e.g., gender, deleted flag).

Consider covering indexes when possible.

Example of a covering index:

select login_name, nick_name from member where login_name = ?
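For this query to be served entirely from the index ("covering" it), a composite index over both selected columns is needed; the index name here is hypothetical:

```sql
alter table `member` add index idx_login_nick (login_name, nick_name);
```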

SQL Optimisation

① Batch Processing

Large updates should be broken into batches to avoid blocking other queries.

update `coupon` set status = 0 where expire_date <= #{currentDate} and status = 1;

Batch pseudocode:

int PAGE_SIZE = 100;
while (true) {
    // Rows updated to status = 0 drop out of the status = 1 result set,
    // so always fetch the first page instead of advancing an offset.
    List<Long> batchIdList = queryList(
        'select id FROM `coupon` WHERE expire_date <= #{currentDate} and status = 1 limit #{PAGE_SIZE}');
    if (CollectionUtils.isEmpty(batchIdList)) {
        return;
    }
    update('update `coupon` set status = 0 where status = 1 and id in #{batchIdList}');
}

② <> Optimisation

The <> operator often prevents index usage. Example:

select id from orders where amount != 100;

If the distribution is highly skewed, a UNION of two range queries may help.

(select id from orders where amount > 100)
union all
(select id from orders where amount < 100 and amount > 0)

③ OR Optimisation

OR on different columns disables composite indexes. Rewrite using UNION:

select id, product_name from orders where mobile_no = '13421800407'
union
select id, product_name from orders where user_id = 100;

④ IN Optimisation

IN works well when the outer table is large and the subquery result is small; otherwise, a JOIN is usually preferable.

select id from orders where user_id in (select id from user where level = 'VIP');
-- rewritten as
select o.id from orders o join user u on o.user_id = u.id where u.level = 'VIP';

⑤ Avoid SELECT *

Fetching all columns increases I/O, transfers data you do not need, and rules out covering‑index optimisations. Select only the columns you actually use.

⑥ LIKE Optimisation

Leading wildcards disable index usage. Use a trailing wildcard or full‑text index instead.

SELECT column FROM table WHERE field LIKE 'keyword%';
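When a leading wildcard is genuinely required, one alternative (assuming MySQL 5.6+ and a hypothetical article_content.body column) is a full‑text index:

```sql
alter table `article_content` add fulltext index ft_body (body);

select article_id
from article_content
where match(body) against ('keyword' in natural language mode);
```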

⑦ JOIN Optimisation

Prefer Nested Loop Join with the smaller result set as the driving table, add indexes on join columns, and increase join_buffer_size if needed. Avoid joining more than three tables.
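A sketch of these principles, with hypothetical tables: index the join column on the larger (driven) table and let the smaller filtered set drive the join:

```sql
alter table `orders` add index idx_user_id (user_id);

select o.id, o.product_name
from user u                          -- smaller, filtered driving table
join orders o on o.user_id = u.id    -- indexed join column on the driven table
where u.level = 'VIP';
```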

⑧ LIMIT Optimisation

Deep pagination degrades performance. Reduce the scanned range by first selecting the IDs, then fetching the rows.

select * from orders where id > (select id from orders order by id desc limit 1000000, 1) order by id desc limit 0,10;

If only the primary key is needed, use a simple range query:

select id from orders where id between 1000000 and 1000010 order by id desc;

Other Databases

Backend developers should master MySQL or SQL Server as the core relational store, while also being aware of mature NoSQL solutions for scenarios where they resolve specific performance bottlenecks.

Author: 编码砖家 | Editor: 陶家龙

Tags: performance, Query Optimization, MySQL, Database Design, Index Optimization, SQL Tuning
Written by Laravel Tech Community

Specializing in Laravel development, we continuously publish fresh content and grow alongside the elegant, stable Laravel framework.