
Why MySQL LIMIT Slows Down on Millions of Rows—and How to Fix It

Through hands‑on experiments with a multi‑million‑row MySQL 5.7 table, this article shows how standard LIMIT pagination becomes slower as offset and data volume grow, then presents practical optimizations (sub‑queries, ID‑based ranges, and column selection) that dramatically improve query performance.

MaGe Linux Operations

Preface

Interviewer: "How would you query ten million rows of data?" Answer: paginate with LIMIT. Interviewer: "Have you actually done it?" Of course. The experiments below back that up.

Data Preparation

To create the test data, we define a table user_operation_log and a stored procedure batch_insert_log() that inserts ten million rows, assembling multi‑row INSERT statements in batches of 1,000 and committing after each batch.

CREATE TABLE `user_operation_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ip` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `op_data` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr1` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr2` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr3` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr4` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr5` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr6` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr7` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr8` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr9` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr10` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr11` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr12` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 CHARACTER SET=utf8mb4 COLLATE=utf8mb4_general_ci ROW_FORMAT=Dynamic;
DELIMITER ;;
CREATE PROCEDURE batch_insert_log()
BEGIN
  DECLARE i INT DEFAULT 1;
  DECLARE userId INT DEFAULT 10000000;
  -- Shared INSERT prefix; VALUES tuples are accumulated in @execData below.
  SET @execSql = 'INSERT INTO `test`.`user_operation_log`(`user_id`, `ip`, `op_data`, `attr1`, `attr2`, `attr3`, `attr4`, `attr5`, `attr6`, `attr7`, `attr8`, `attr9`, `attr10`, `attr11`, `attr12`) VALUES';
  SET @execData = '';
  WHILE i <= 10000000 DO
    -- Deliberately long filler text (Chinese for "a very, very, ... long test attribute") to pad each row.
    SET @attr = '"测试很长很长很长很长很长很长很长很长很长很长很长很长很长很长很长很长很长的属性"';
    -- The op_data value '用户登录操作' means "user login operation".
    SET @execData = CONCAT(@execData, "(", userId + i, ", '10.0.69.175', '用户登录操作', ", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ")");
    IF i % 1000 = 0 THEN
      -- Every 1,000 rows: assemble one multi-row INSERT, execute it, and commit.
      SET @stmtSql = CONCAT(@execSql, @execData, ";");
      PREPARE stmt FROM @stmtSql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;
      COMMIT;
      SET @execData = '';
    ELSE
      -- Otherwise keep accumulating tuples, separated by commas.
      SET @execData = CONCAT(@execData, ",");
    END IF;
    SET i = i + 1;
  END WHILE;
END;;
DELIMITER ;
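
With the table and procedure in place, generating the data is a single call (expect it to run for a long time on modest hardware):

CALL batch_insert_log();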

Start Testing

The test machine is a low‑end Windows 10 laptop with an i5 CPU and an SSD that reads and writes at roughly 500 MB/s. Because of the weak hardware, only 3,148,000 rows were actually inserted (about 5 GB on disk, with no secondary indexes), which took 38 minutes.

SELECT count(1) FROM `user_operation_log`;

Result: 3,148,000 rows.

Three count queries took 14,060 ms, 13,755 ms, and 13,447 ms.

Normal Pagination Queries

MySQL LIMIT syntax:

SELECT * FROM table LIMIT [offset,] rows | rows OFFSET offset

First parameter: the offset of the first row to return (0‑based).

Second parameter: the maximum number of rows to return.
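
The two forms are interchangeable; for example, the following queries return the same rows. Note that none of the test queries use ORDER BY, so row order is formally unspecified; in practice these full scans return rows in primary‑key order.

SELECT * FROM `user_operation_log` LIMIT 10000, 10;
SELECT * FROM `user_operation_log` LIMIT 10 OFFSET 10000;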

Test query:

SELECT * FROM `user_operation_log` LIMIT 10000, 10;

Three runs took 59 ms, 49 ms, and 50 ms.

Same Offset, Different Data Volumes

SELECT * FROM `user_operation_log` LIMIT 10000, 10;
SELECT * FROM `user_operation_log` LIMIT 10000, 100;
SELECT * FROM `user_operation_log` LIMIT 10000, 1000;
SELECT * FROM `user_operation_log` LIMIT 10000, 10000;
SELECT * FROM `user_operation_log` LIMIT 10000, 100000;
SELECT * FROM `user_operation_log` LIMIT 10000, 1000000;

Query times increase with the number of rows returned.

Same Data Volume, Different Offsets

SELECT * FROM `user_operation_log` LIMIT 100, 100;
SELECT * FROM `user_operation_log` LIMIT 1000, 100;
SELECT * FROM `user_operation_log` LIMIT 10000, 100;
SELECT * FROM `user_operation_log` LIMIT 100000, 100;
SELECT * FROM `user_operation_log` LIMIT 1000000, 100;

Query times grow as the offset becomes larger.

How to Optimize

The tests above expose two problems to solve: large offsets and large returned data volumes. We tackle each in turn.

Optimizing Large Offsets

Sub‑query Method

First locate the id at the desired offset, then fetch rows using that id:

-- Baseline: plain LIMIT with a large offset (slow)
SELECT * FROM `user_operation_log` LIMIT 1000000, 10;
-- Step 1: find only the id at the target offset
SELECT id FROM `user_operation_log` LIMIT 1000000, 1;
-- Step 2: combine both steps in one query, starting from that id
SELECT * FROM `user_operation_log` WHERE id >= (SELECT id FROM `user_operation_log` LIMIT 1000000, 1) LIMIT 10;

The plain LIMIT query is the slowest, while both sub‑query forms are markedly faster: the inner SELECT id walks only primary‑key index entries to find the boundary id, instead of materializing (and discarding) a million full rows.

Note: MySQL does not support LIMIT directly inside an IN subquery, so the LIMIT is wrapped in a derived table:

SELECT * FROM `user_operation_log` WHERE id IN (SELECT t.id FROM (SELECT id FROM `user_operation_log` LIMIT 1000000, 10) AS t);
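
To see why the boundary lookup is cheap, you can check the execution plan (a quick sanity check; exact plan output varies with version and statistics):

EXPLAIN SELECT id FROM `user_operation_log` LIMIT 1000000, 1;

This should report a scan over the PRIMARY key with Extra showing Using index: the million skipped entries are walked inside the index, and only the single boundary id is materialized.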

ID Range Method

When ids are continuous (no gaps left by deletions), the offset can be replaced outright with an id range, using BETWEEN or a simple >= condition:

SELECT * FROM `user_operation_log` WHERE id BETWEEN 1000000 AND 1000100 LIMIT 100;
SELECT * FROM `user_operation_log` WHERE id >= 1000000 LIMIT 100;

These queries run much faster than the equivalent large‑offset LIMIT.

Note: in the second query, LIMIT only caps the number of rows returned; there is no offset to scan past at all.
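
The same seek idea generalizes to keyset (cursor) pagination, a common variant that goes beyond the original tests: rather than computing an id range up front, remember the last id the client saw and seek past it. A minimal sketch, where :last_seen_id is a hypothetical application‑side bind parameter:

-- First page:
SELECT * FROM `user_operation_log` ORDER BY id LIMIT 100;
-- Every subsequent page: bind :last_seen_id to the id of the last row already delivered
SELECT * FROM `user_operation_log` WHERE id > :last_seen_id ORDER BY id LIMIT 100;

Unlike BETWEEN, this stays correct even when ids have gaps, because it seeks to a position rather than counting rows.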

Optimizing Large Data Volumes

Fetching fewer columns reduces I/O:

-- Baseline: all columns
SELECT * FROM `user_operation_log` LIMIT 1, 1000000;
-- Primary key only
SELECT id FROM `user_operation_log` LIMIT 1, 1000000;
-- Every column, listed explicitly
SELECT id, user_id, ip, op_data, attr1, attr2, attr3, attr4, attr5, attr6, attr7, attr8, attr9, attr10, attr11, attr12 FROM `user_operation_log` LIMIT 1, 1000000;

The id‑only query is dramatically faster: MySQL reads and sends far less data per row, so fetch only the columns you actually need.
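
To reproduce these timings yourself, one convenient tool (an aside, not part of the original write‑up) is MySQL's session profiler, deprecated but still available in 5.7:

SET profiling = 1;
SELECT id FROM `user_operation_log` LIMIT 1, 1000000;
SHOW PROFILES; -- lists each recent statement with its duration in seconds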

Is SELECT * Really a Good Idea?

Using SELECT * forces the database to parse all column metadata, increases network traffic, and may pull large, unnecessary fields (e.g., logs or large text blobs), especially when client and server are on different machines.

Conclusion

Readers are encouraged to try the scripts themselves for deeper insights.
