
Designing Short Numeric Account IDs Using MySQL Auto‑Increment and Segment Allocation

This article explores the problem of generating short, sequential numeric account IDs, evaluates the classic Snowflake algorithm, proposes a MySQL auto‑increment based solution with a dedicated ID‑generation table, analyzes deadlock issues, and finally presents a segment‑based ID allocation scheme that reduces waste and supports scaling.


When a project required numeric account IDs that are short, easy to type, and sequential, the initial idea was to use Twitter's Snowflake algorithm, but its 64-bit IDs (up to 19 decimal digits) were too long for the business needs.

The team then tried a MySQL‑based approach by creating a dedicated account ID generation table with an AUTO_INCREMENT column. Each insertion used REPLACE INTO on a row with a constant stub value, ensuring the table always contained a single row and the id column grew with each request.

CREATE TABLE `tbl_accid` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `stub` char(1) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_stub` (`stub`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
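An allocation round against this table issues the REPLACE INTO described above and then reads back the generated value on the same connection (the exact statements are not shown in the original; the stub value 'a' here is illustrative):

```sql
-- One ID-allocation round against the single-row ticket table.
-- The stub value 'a' is illustrative; any constant works, since the
-- UNIQUE constraint on `stub` keeps the table at exactly one row.
REPLACE INTO tbl_accid (stub) VALUES ('a');

-- Return the freshly generated id; LAST_INSERT_ID() is per-connection,
-- so concurrent clients cannot see each other's values.
SELECT LAST_INSERT_ID();
```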

However, under concurrent load this pattern caused InnoDB deadlocks, because REPLACE INTO is internally decomposed into a DELETE followed by an INSERT. Tests with mysqlslap produced deadlocks with as few as two concurrent threads.
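The decomposition is easiest to see when the stub row already exists; REPLACE INTO then behaves roughly like the following pair of statements (a simplified illustration of the effect, not the exact internal execution):

```sql
-- What REPLACE INTO effectively does once the stub row exists:
DELETE FROM tbl_accid WHERE stub = 'a';     -- locks the existing row
INSERT INTO tbl_accid (stub) VALUES ('a');  -- inserts a fresh auto-increment id
-- Two sessions interleaving these two steps can each end up holding
-- a lock the other needs, which is the deadlock mysqlslap exposed.
```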

To avoid the deadlock, the team considered alternative strategies:

- Using multiple MySQL instances with different auto_increment_increment and auto_increment_offset values; but these settings are global/session scoped and cannot be applied per table.

- Applying a read-write split where only one table is writable for ID generation while the others are read-only shards, requiring periodic data migration.

- Pre-allocating a batch of IDs (segment allocation) similar to Meituan's MT-Leaf, reducing the number of insert operations.
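For reference, the increment/offset idea would have looked like this on two instances; the catch, as noted above, is that these variables apply to the whole server (or session), not to a single table:

```sql
-- Hypothetical two-instance setup producing disjoint ID streams.
-- Instance A: odd ids
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;   -- 1, 3, 5, ...
-- Instance B: even ids
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;   -- 2, 4, 6, ...
```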

The final solution combined the batch‑allocation idea with a dedicated tbl_account_freeid table that stores a segment number for each login server instance. Each segment represents a range of 1,000 IDs, and the server consumes IDs from its in‑memory segment until exhausted, then inserts a new row to obtain the next segment.

CREATE TABLE `tbl_account_freeid` (
  `instance_id` int NOT NULL,
  `segment` bigint NOT NULL,
  `left` int NOT NULL DEFAULT 0,
  PRIMARY KEY (`instance_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
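The article does not show the statements the login server runs against this table; the following is a plausible sketch, assuming one row per instance updated in place, and a zero-based mapping where segment n covers IDs n × 1,000 through n × 1,000 + 999:

```sql
-- On startup: recover the last segment and unused count for instance 1.
SELECT segment, `left` FROM tbl_account_freeid WHERE instance_id = 1;

-- First run, or when the in-memory segment is exhausted (`left` = 0):
-- claim the next segment of 1,000 IDs.
INSERT INTO tbl_account_freeid (instance_id, segment, `left`)
VALUES (1, 0, 1000)
ON DUPLICATE KEY UPDATE segment = segment + 1, `left` = 1000;

-- On graceful shutdown: write back how many IDs remain unused
-- (137 is just an example value).
UPDATE tbl_account_freeid SET `left` = 137 WHERE instance_id = 1;
```

Because each instance touches only its own primary-key row, concurrent instances never contend for the same record, which is what removes the deadlock.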

When a server restarts, it reads its last segment and the remaining left count; if left is zero, it requests a new segment. This design eliminates deadlocks, keeps IDs short and sequential, and tolerates server scaling, while minimizing ID waste by writing back the unused count on graceful shutdown.

In production the scheme proved stable, and the article invites readers to share any further improvements.

Tags: backend, performance, MySQL, auto-increment, ID generation, short IDs
Written by Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.