
When MySQL Auto‑Increment Hits INT Limit: Diagnosis and Fixes

The article recounts a MySQL production incident where an INT auto‑increment column reached its maximum value, causing insert failures, and walks through analysis, three remediation options, a stored‑procedure cleanup, a conversion to BIGINT, performance monitoring, and lessons on concurrency and schema design.

dbaplus Community

Story Background

Operations reported a device that could not be found in the backend. The author initially suspected data corruption, since the service was throwing errors and failing to write to the database, yet the logs showed no program errors. The write logic first checks Redis for the device's unique ID: if the key is absent, it inserts a new row into MySQL; otherwise it updates the existing row. After the author deleted the Redis key and reran the flow, an INSERT statement was printed in the logs but no row appeared in the table, pointing to a deeper issue.
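That check-then-write pattern can also be collapsed into a single statement on the MySQL side; the sketch below assumes a hypothetical `device` table with a unique `device_id` key (names are illustrative, not from the article):

```sql
-- Insert a new row, or update the existing one if device_id already exists.
-- Assumes a UNIQUE index on device_id (hypothetical schema).
INSERT INTO device (device_id, status)
VALUES ('dev-001', 'online')
ON DUPLICATE KEY UPDATE status = VALUES(status);
```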

Problem Analysis

The table holds over 500 million (5 × 10⁸) rows. The author initially suspected that overall table size was the bottleneck, but MySQL's size limits were not the cause. A screenshot of the table showed the auto‑increment column at 2 147 483 647, the maximum value of a signed INT. The column was defined as INT with a display width of 50, which has no effect on the storable range, so inserts began failing once the next auto‑increment value exceeded the INT range.
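The symptom can be confirmed without a screenshot. Assuming the table is named `device` in schema `mydb` (both illustrative), the column type and the exhausted counter are visible in `information_schema`:

```sql
-- Check the column's real type (display width does not affect range)
SELECT COLUMN_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'device' AND COLUMN_NAME = 'id';

-- Check the next auto-increment value;
-- 2147483647 means a signed INT counter is exhausted
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'device';
```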

Further investigation revealed that the auto‑increment IDs were not continuous because a high‑performance ID generation strategy was enabled, causing gaps under high concurrency.
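How far the counter has run ahead of the data can be measured directly; `device` is again an assumed table name:

```sql
-- A large difference means IDs were allocated but never used (gaps)
SELECT MAX(id)            AS max_id,
       COUNT(*)           AS row_count,
       MAX(id) - COUNT(*) AS unused_ids
FROM device;
```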

Proposed Solutions (from DeepSeek)

1. Change the column type to BIGINT in place, without migrating data (simple, but the DDL locks the table for its duration).

2. Adopt a distributed ID scheme, which requires redesigning the table, migrating data, and supporting infrastructure such as Redis.

3. Shard the data across databases and tables, and introduce Elasticsearch with synchronization middleware such as Logstash.
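For the first option, MySQL can be asked up front whether the conversion could avoid a long lock. A column data-type change does not support the in-place algorithm, so the statement below fails immediately instead of running, confirming that the rewrite must copy the whole table (`xxx` is the placeholder table name used later in the article):

```sql
-- Expected to FAIL with "ALGORITHM=INPLACE is not supported. Reason:
-- Cannot change column type INPLACE." A data-type change requires
-- ALGORITHM=COPY, which blocks concurrent writes during the rebuild.
ALTER TABLE xxx
  MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  ALGORITHM = INPLACE, LOCK = NONE;
```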

The author considered the first option but noted that locking the table for a long time was undesirable. Observations showed that the ID gaps grew with higher concurrency (30 consumer threads). Reducing the consumer count to 15 decreased the gaps.

Stored Procedure for Cleaning ID Gaps

DELIMITER //
CREATE PROCEDURE clean_id_gaps()
BEGIN
  DECLARE start_id INT DEFAULT 1;
  DECLARE end_id INT DEFAULT 100000;
  WHILE start_id <= end_id DO
    -- Reassign the current highest ID to the next sequential slot
    UPDATE `xxx`
    SET id = start_id + 1
    WHERE id = (SELECT original_id FROM (
      SELECT id AS original_id FROM `xxx` ORDER BY id DESC LIMIT 1) AS t);
    SET start_id = start_id + 1;
  END WHILE;
END //
DELIMITER ;

After reclaiming roughly five million IDs, the author concluded that high concurrency was behind the large gaps. The consumer had started as a single thread processing 500 records per batch; it was later switched to 30 parallel Java consumers, and then to batched processing across those 30 threads.
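One plausible source of such gaps, beyond application-level ID allocation, is InnoDB's interleaved auto-increment lock mode, which favors concurrency over gap-free sequences. Whether it is active is a one-line check (linking it to the article's "high-performance ID generation strategy" is my assumption; the article does not name the mechanism):

```sql
-- 0 = traditional, 1 = consecutive, 2 = interleaved (default in MySQL 8.0);
-- interleaved mode allows concurrent inserts to leave gaps in the sequence
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';
```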

Changing the Column to BIGINT

The team finally altered the column definition:

ALTER TABLE xxx MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;

Explanation: UNSIGNED doubles the positive range, NOT NULL enforces presence, and AUTO_INCREMENT continues the sequence.
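The gained headroom is worth spelling out: signed INT tops out at 2 147 483 647, while BIGINT UNSIGNED reaches 18 446 744 073 709 551 615. MySQL can print both limits itself:

```sql
-- ~0 flips all 64 bits, yielding the BIGINT UNSIGNED maximum
SELECT 2147483647 AS signed_int_max,
       ~0         AS bigint_unsigned_max;
```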

In a test environment with 100 million rows, the alteration took about one hour; in production it was estimated to require 6–7 hours. The author used a performance‑schema query to monitor progress:

SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED,
       ROUND(WORK_COMPLETED/WORK_ESTIMATED*100, 2) AS "Progress (%)"
FROM performance_schema.events_stages_current;
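On a default configuration this query can come back empty: the ALTER's stage events are only recorded once the relevant Performance Schema instruments and consumers are enabled, which can be done at runtime:

```sql
-- Enable the InnoDB ALTER TABLE progress instruments
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE 'stage/innodb/alter%';

-- Enable the stage-event consumers that expose them
UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE '%stages%';
```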

The query showed less than 50 % progress after many hours, with the operation slowing down due to larger index trees and CPU contention from other threads. Roughly 1.5 % of rows were processed per hour.

Final Outcome

The alteration started on a Monday night and finished by Thursday morning, a little over two days in total.

Key Takeaways

Design schemas with sufficient ID range; an INT auto‑increment column can overflow long before a table reaches two billion rows once IDs are allocated with gaps.

High‑performance auto‑increment strategies can create large ID gaps under heavy concurrency.

Multi‑threaded batch consumption improves throughput but accelerates ID consumption and widens the gaps.

Converting to BIGINT UNSIGNED resolves overflow but requires careful planning due to table‑locking.

Monitoring long‑running DDL with performance‑schema queries helps gauge progress and identify bottlenecks.

Tags: concurrency, MySQL, BIGINT, auto_increment, Database Performance, Stored Procedure, INT overflow
Written by

dbaplus Community

Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences—delivered by industry experts.
