
Limiting Table Row Count in MySQL: Trigger, Partition, and Tablespace Solutions

This article explores multiple MySQL techniques—triggers with a counter table, range partitioning, general tablespace limits, and application‑side buffering—to enforce a hard cap of 10,000 rows per table, discussing implementation steps, code examples, and the trade‑offs of each method.

Aikesheng Open Source Community

Background

The author was asked by a client whether a single MySQL table could be limited to a fixed number of records (e.g., 10,000 rows) so that further inserts are rejected or cause an error.

Main Content

There are generally two categories of solutions: one implemented on the database side and the other on the application side.

Database‑side solutions (hard limit of 10,000 rows)

1. Trigger solution

The idea is to maintain a counter table that a pair of triggers keeps in sync with the row count: an insert trigger increments it and a delete trigger decrements it. A stored procedure then checks the counter before each insert and, once the cap is reached, deletes the oldest row first so the total never exceeds the limit.

mysql:ytt_new>create table t1(id int auto_increment primary key, r1 int);
Query OK, 0 rows affected (0.06 sec)

mysql:ytt_new>create table t1_count(cnt smallint unsigned);
Query OK, 0 rows affected (0.04 sec)

mysql:ytt_new>insert t1_count set cnt=0;
Query OK, 1 row affected (0.11 sec)

Insert‑trigger to increment the counter:

DELIMITER $$
USE `ytt_new`$$
DROP TRIGGER IF EXISTS `tr_t1_insert`$$
CREATE TRIGGER `tr_t1_insert` AFTER INSERT ON `t1`
FOR EACH ROW BEGIN
   UPDATE t1_count SET cnt = cnt + 1;
END$$
DELIMITER ;

Delete‑trigger to decrement the counter:

DELIMITER $$
USE `ytt_new`$$
DROP TRIGGER IF EXISTS `tr_t1_delete`$$
CREATE TRIGGER `tr_t1_delete` AFTER DELETE ON `t1`
FOR EACH ROW BEGIN
   UPDATE t1_count SET cnt = cnt - 1;
END$$
DELIMITER ;

Populate the table with 10,000 rows for testing:

mysql:ytt_new>insert t1 (r1) with recursive tmp(a,b) as (select 1,1 union all select a+1,ceil(rand()*20) from tmp where a<10000 ) select b from tmp;
Query OK, 10000 rows affected (0.68 sec)

Check the counter:

mysql:ytt_new>select cnt from t1_count;
+-------+
| cnt   |
+-------+
| 10000 |
+-------+

A stored procedure can encapsulate the logic, deleting the oldest row when the limit is reached before inserting a new one:

DELIMITER $$
USE `ytt_new`$$
DROP PROCEDURE IF EXISTS `sp_insert_t1`$$
CREATE PROCEDURE `sp_insert_t1`(IN f_r1 INT)
BEGIN
  DECLARE v_cnt INT DEFAULT 0;
  SELECT cnt INTO v_cnt FROM t1_count;
  IF v_cnt >= 10000 THEN
    DELETE FROM t1 ORDER BY id ASC LIMIT 1;
  END IF;
  INSERT INTO t1(r1) VALUES (f_r1);
END$$
DELIMITER ;

Calling the procedure keeps the row count at 10,000:

mysql:ytt_new>call sp_insert_t1(9999);
Query OK, 1 row affected (0.02 sec)

mysql:ytt_new>select count(*) from t1;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+

Drawbacks of the trigger approach: TRUNCATE TABLE does not fire delete triggers, so the counter desynchronizes; DROP TABLE removes the triggers, which must then be recreated; and every insert has to go through the stored procedure rather than a plain INSERT.
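If the counter does drift (for example after a TRUNCATE TABLE, which fires no delete triggers), it can be resynchronized from the real row count. A minimal sketch, assuming the single-row t1_count table created above:

-- Resynchronize the counter with the actual row count.
-- Assumes t1_count holds exactly one row, as created earlier.
UPDATE t1_count SET cnt = (SELECT COUNT(*) FROM t1);

Running this periodically (or after any maintenance operation) limits how long a stale counter can go unnoticed.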

2. Partition table solution

Create a table partitioned by RANGE on the auto-increment key so that the first partition covers ids up to 10,000; when it fills, drop that partition and reorganize p_max to carve out the next 10,000-id partition.

mysql:ytt_new>create table t1(id int auto_increment primary key, r1 int) partition by range(id) (partition p1 values less than(10001), partition p_max values less than(maxvalue));
Query OK, 0 rows affected (0.45 sec)

Check if the first partition is full:

mysql:ytt_new>select count(*) from t1 partition(p1);
+----------+
| count(*) |
+----------+
|    10000 |
+----------+

Drop the full partition and reorganize:

mysql:ytt_new>alter table t1 drop partition p1;
Query OK, 0 rows affected (0.06 sec)

mysql:ytt_new>alter table t1 reorganize partition p_max into (partition p1 values less than (20001), partition p_max values less than (maxvalue));
Query OK, 0 rows affected (0.60 sec)

Advantages: inserts can come from any source, and dropping a partition is a fast, metadata-only operation. Disadvantages: the cap is enforced on the id range rather than the actual row count, so gaps in the auto-increment sequence let a partition fill with fewer than 10,000 real rows, and the partition boundaries must be redefined after every rotation.
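The check-drop-reorganize cycle can be wrapped in a stored procedure so rotation is a single call. The sketch below (a hypothetical sp_rotate_t1, not from the original article) reads the current p1 boundary from information_schema and builds the DDL with a prepared statement, since partition boundaries cannot be bound as parameters:

DELIMITER $$
USE `ytt_new`$$
DROP PROCEDURE IF EXISTS `sp_rotate_t1`$$
CREATE PROCEDURE `sp_rotate_t1`()
BEGIN
  DECLARE v_cnt INT DEFAULT 0;
  DECLARE v_next BIGINT DEFAULT 0;
  -- How many rows does the first partition currently hold?
  SELECT COUNT(*) INTO v_cnt FROM t1 PARTITION (p1);
  IF v_cnt >= 10000 THEN
    -- Next boundary = current p1 boundary + 10000 (e.g. 10001 -> 20001).
    SELECT partition_description + 10000 INTO v_next
      FROM information_schema.partitions
     WHERE table_schema = DATABASE()
       AND table_name = 't1'
       AND partition_name = 'p1';
    ALTER TABLE t1 DROP PARTITION p1;
    SET @ddl = CONCAT('ALTER TABLE t1 REORGANIZE PARTITION p_max INTO (',
                      'PARTITION p1 VALUES LESS THAN (', v_next, '), ',
                      'PARTITION p_max VALUES LESS THAN (MAXVALUE))');
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END IF;
END$$
DELIMITER ;

Note that ALTER TABLE causes an implicit commit, so the drop and the reorganize are not atomic with respect to concurrent inserts.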

3. General tablespace solution

Pre‑calculate the disk space needed for 10,000 rows, create a dedicated general tablespace sized accordingly, and assign the table to it. Once the tablespace can no longer grow, further inserts fail with a “table is full” error.
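The per-row footprint can be estimated from a populated sample of the table. A rough sketch (avg_row_length is only approximate for InnoDB, so add generous headroom):

-- Estimate: average row length from a representative sample, times 10,000.
SELECT avg_row_length * 10000 AS estimated_bytes
  FROM information_schema.tables
 WHERE table_schema = 'ytt_new' AND table_name = 't1';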

mysql:ytt_new>create tablespace ts1 add datafile '/tmp/mysql/ts1.ibd' engine innodb;
Query OK, 0 rows affected (0.11 sec)

mysql:ytt_new>alter table t1 tablespace ts1;
Query OK, 0 rows affected (0.12 sec)

Attempting to insert after the limit triggers an error:

mysql:ytt_new>insert t1 (r1) values (200);
ERROR 1114 (HY000): The table 't1' is full

To reuse the table, move it back to a file-per-table tablespace, drop the dedicated tablespace, and truncate the table:

mysql:ytt_new>alter table t1 tablespace innodb_file_per_table;
Query OK, 0 rows affected (0.18 sec)

mysql:ytt_new>drop tablespace ts1;
Query OK, 0 rows affected (0.13 sec)

mysql:ytt_new>truncate table t1;
Query OK, 0 rows affected (0.04 sec)

Application‑side solution

Cache incoming rows in a file (e.g., t1.csv); when the file reaches 10,000 lines, truncate the table on the DB side and bulk‑load the file.
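A minimal sketch of that buffering logic in Python. The flush_to_db callback is hypothetical; a real one would TRUNCATE t1 and bulk-load the file on the database side (e.g. with LOAD DATA LOCAL INFILE):

```python
# Sketch of the application-side buffer: rows accumulate in a local CSV
# file and are handed to a flush callback once the limit is reached.
# flush_to_db is a hypothetical hook supplied by the caller.
import csv
import os

LIMIT = 10_000


class RowBuffer:
    def __init__(self, path, flush_to_db, limit=LIMIT):
        self.path = path
        self.flush_to_db = flush_to_db
        self.limit = limit
        self.count = 0

    def append(self, row):
        # Append one row to the buffer file.
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow(row)
        self.count += 1
        if self.count >= self.limit:
            # Hand the full file to the DB side (truncate + bulk load),
            # then start a fresh buffer file.
            self.flush_to_db(self.path)
            os.remove(self.path)
            self.count = 0
```

The counter lives in memory here; a production version would recover it from the file's line count on startup so a restart does not lose track of buffered rows.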

Conclusion

In the MyISAM era MySQL offered a max_rows table attribute, but it was only advisory. InnoDB provides no built‑in hard limit, so the approaches above—triggers with a counter table, range partitioning, dedicated tablespaces, or application‑side buffering—are the practical ways to enforce a fixed row count, with the choice depending on specific requirements.

Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
