Databases · 9 min read

Why LIMIT Pagination Slows Down on Large Tables and How to Optimize It

This tutorial demonstrates how MySQL LIMIT pagination performs on tables with millions of rows, measures query times for various offsets and batch sizes, and presents practical optimization techniques such as sub‑queries, ID‑range filtering, and column selection to improve speed.

macrozheng

The article begins with an interview scenario where a candidate is asked how to query a table containing ten million rows, leading to a practical demonstration using MySQL 5.7.26.

Data Preparation

Since generating ten million rows directly is impractical, a stored procedure is used to batch‑insert rows, committing every 1,000 rows to speed up the process.

DELIMITER ;;
CREATE PROCEDURE batch_insert_log()
BEGIN
  DECLARE i INT DEFAULT 1;
  DECLARE userId INT DEFAULT 10000000;
  -- Base INSERT statement; rows are accumulated in @execData and flushed every 1,000 iterations
  SET @execSql = 'INSERT INTO `test`.`user_operation_log`(`user_id`, `ip`, `op_data`, `attr1`, `attr2`, `attr3`, `attr4`, `attr5`, `attr6`, `attr7`, `attr8`, `attr9`, `attr10`, `attr11`, `attr12`) VALUES';
  SET @execData = '';
  WHILE i <= 10000000 DO
    -- Long placeholder string ("a very long... attribute") to fatten each row
    SET @attr = "'测试很长很长...的属性'";
    SET @execData = CONCAT(@execData, "(", userId + i, ", '10.0.69.175', '用户登录操作'", ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ")");
    IF i % 1000 = 0 THEN
      -- Flush the accumulated batch via a prepared statement, then commit
      SET @stmtSql = CONCAT(@execSql, @execData, ';');
      PREPARE stmt FROM @stmtSql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;
      COMMIT;
      SET @execData = '';
    ELSE
      SET @execData = CONCAT(@execData, ',');
    END IF;
    SET i = i + 1;
  END WHILE;
END;;
DELIMITER ;
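With the procedure defined, it can be invoked from the mysql client. This is a sketch assuming the `user_operation_log` table has already been created with the columns the insert statement references:

```sql
-- Run the batch insert; this can take many minutes depending on hardware
CALL batch_insert_log();

-- Verify how many rows actually landed
SELECT COUNT(*) FROM `test`.`user_operation_log`;
```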

Test Execution

On a low‑end Windows 10 laptop (Core i5, ~500 GB SSD), 3,148,000 rows were inserted, occupying roughly 5 GB with no secondary indexes. The following timings were recorded:

Three count queries: ~14 s each.

Standard LIMIT pagination (offset 10,000, rows 10): 49‑60 ms per run.
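The baseline measurements above correspond to queries of roughly this shape (table name taken from the insert script; exact timings will of course vary by machine):

```sql
-- Count query (~14 s each on the test machine)
SELECT COUNT(*) FROM user_operation_log;

-- Standard LIMIT pagination: skip 10,000 rows, return 10 (49-60 ms)
SELECT * FROM user_operation_log LIMIT 10000, 10;
```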

Impact of Data Volume

Running the same offset with increasing row counts shows a linear increase in query time, confirming that larger result sets take longer.
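A sketch of the kind of query series this test uses; the exact batch sizes in the original experiment may differ:

```sql
-- Fixed offset, growing result set: each step returns 10x more rows
SELECT * FROM user_operation_log LIMIT 10000, 10;
SELECT * FROM user_operation_log LIMIT 10000, 100;
SELECT * FROM user_operation_log LIMIT 10000, 1000;
SELECT * FROM user_operation_log LIMIT 10000, 10000;
```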

Impact of Offset Size

Keeping the result size constant while increasing the offset demonstrates that larger offsets also increase latency, because MySQL must scan and discard more rows.
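The offset test mirrors the previous one, but holds the row count steady while pushing the offset deeper (again a sketch; the original offsets may differ):

```sql
-- Fixed result size, growing offset: MySQL must read and discard
-- all skipped rows before returning the 100 requested ones
SELECT * FROM user_operation_log LIMIT 100, 100;
SELECT * FROM user_operation_log LIMIT 10000, 100;
SELECT * FROM user_operation_log LIMIT 1000000, 100;
```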

Optimization Strategies

Reduce Large Offsets

Two approaches are presented:

Sub‑query to locate the start ID: first fetch the ID at the desired offset, then query rows where id >= fetched_id with a small LIMIT. This leverages the primary‑key index and avoids scanning discarded rows.

ID range (BETWEEN) filtering: when IDs are sequential and gap‑free, compute the start and end IDs and query with WHERE id BETWEEN start AND end LIMIT n. This is the fastest method because it reads only the needed index range.
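Both approaches can be sketched as follows, assuming `id` is the auto-increment primary key and the target is the page starting at offset 1,000,000:

```sql
-- 1. Sub-query: locate the starting id via the primary-key index,
--    then read forward from it (valid when paginating in id order)
SELECT * FROM user_operation_log
WHERE id >= (SELECT id FROM user_operation_log LIMIT 1000000, 1)
LIMIT 100;

-- 2. ID range: when ids are sequential with no gaps, compute the
--    window directly and let the BETWEEN predicate walk the index
SELECT * FROM user_operation_log
WHERE id BETWEEN 1000000 AND 1000099
LIMIT 100;
```

Note the ID-range form silently breaks if rows have been deleted, since gaps shift which rows fall in each window; the sub-query form only assumes `id` ordering.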

Reduce Result Set Size

Fetching only required columns instead of SELECT * dramatically cuts network and parsing overhead. Experiments show that selecting a subset of columns reduces query time compared to selecting all columns, especially when the table contains large text fields.
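For example (column names taken from the insert script, with `id` assumed as the primary key):

```sql
-- SELECT * drags every large attr column over the wire
SELECT * FROM user_operation_log LIMIT 1000000, 100;

-- Naming only the needed columns cuts transfer and parsing cost
SELECT id, user_id, ip, op_data FROM user_operation_log LIMIT 1000000, 100;
```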

Why Avoid SELECT *

Using SELECT * forces MySQL to resolve metadata for every column, increases parsing work, and may transmit unnecessary large fields (e.g., logs, blobs), leading to higher CPU and network costs.

Conclusion

The author encourages readers to replicate the experiments, noting that results may vary on different hardware or when client and server are on separate machines.

Key takeaways: large offsets and large result sets degrade pagination performance; using indexed ID lookups, limiting columns, and avoiding SELECT * are effective mitigations.

Tags: Performance, Optimization, SQL, MySQL, Pagination, LIMIT, large datasets
Written by

macrozheng

Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.
