Improving Database Auto‑Increment Primary Keys for Distributed ID Generation
This article examines the limitations of using database auto‑increment primary keys for distributed ID generation and presents a segment‑allocation approach, discusses concurrency challenges, and introduces a double‑buffer strategy to reduce database load and improve fault tolerance.
The previous article introduced using a database's auto‑increment primary key to generate distributed IDs. That approach works well for short user IDs but has serious drawbacks, chiefly an inflexible step size and heavy pressure on the database.
The main problems are:
Once the step size is set, it is hard to expand.
The database becomes a bottleneck because each ID request requires a round‑trip to the DB.
To mitigate these issues, the article proposes allocating ranges of IDs (segments) from the database instead of a single ID per request. The ID table contains fields such as id, biz_tag, max_id, step, desc, and update_time. When a service needs IDs, it requests a segment (e.g., max_id = 0, step = 1000), receives the range [1, 1000], caches it in JVM memory, and consumes IDs locally with AtomicLong.getAndIncrement().
When the range is exhausted, the service requests the next segment (max_id = 1000, step = 1000 → [1001, 2000]). This reduces database load and lets the system keep operating for a while even if the database is down.
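The segment mechanics above can be sketched roughly as follows; the Segment class and its members are illustrative names, not from the original article:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of one in-memory segment; class and field names are hypothetical.
class Segment {
    private final AtomicLong current; // next ID to hand out
    private final long max;           // inclusive upper bound of this segment

    // The database returns the previous max_id; the segment covers (maxId, maxId + step].
    Segment(long maxId, long step) {
        this.current = new AtomicLong(maxId + 1);
        this.max = maxId + step;
    }

    // Returns the next ID, or -1 when the segment is exhausted
    // and the service must fetch the next one from the database.
    long nextId() {
        long id = current.getAndIncrement();
        return id <= max ? id : -1;
    }
}
```

With max_id = 0 and step = 1000, this yields the range [1, 1000] from the article; a second segment built from max_id = 1000 continues at 1001.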
The article also addresses contention: multiple services may read the same max_id concurrently and hand out duplicate IDs. Solutions include a distributed lock, or a database row‑level lock taken inside the update transaction, so that only one service advances max_id at a time.
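As a minimal in‑process analogue of that row‑level locking, the sketch below guards the max_id update with a ReentrantLock; SegmentTable and allocate are hypothetical names, and in SQL the equivalent is an UPDATE that advances max_id by step inside a transaction:

```java
import java.util.concurrent.locks.ReentrantLock;

// In-process analogue of a DB row lock: only one caller at a time advances
// max_id and reads back its segment. All names here are illustrative.
class SegmentTable {
    private long maxId = 0;                      // stands in for the max_id column
    private final ReentrantLock rowLock = new ReentrantLock();

    // Returns the inclusive range [start, end] reserved for this caller.
    long[] allocate(long step) {
        rowLock.lock();
        try {
            long start = maxId + 1;
            maxId += step;
            return new long[] { start, maxId };
        } finally {
            rowLock.unlock();
        }
    }
}
```

Because the read and the increment happen under the same lock, two services can never be handed overlapping ranges, which is exactly what the transactional row lock guarantees on the database side.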
Another issue is burst blocking: when many services exhaust their segments at the same moment, their requests all block on the database at once. To solve this, a double‑buffer scheme is introduced. Two buffers (buffer1 and buffer2) each hold an ID segment; once the active buffer's segment usage reaches 10%, a background thread fetches a new segment for the other buffer. When the active buffer is depleted, the system switches to the other one automatically, ensuring a continuous ID supply without a synchronous trip to the database.
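A simplified, single‑threaded sketch of that double‑buffer idea is below; all names are hypothetical, and the standby fetch runs inline here to keep the example self‑contained, whereas in production it would run on a background thread once the 10% usage threshold is crossed:

```java
import java.util.function.LongSupplier;

// Simplified double-buffer sketch; names are illustrative, not from the article.
class DoubleBufferAllocator {
    private final long step;
    private final LongSupplier fetchMaxId; // stands in for the DB round-trip; returns the new max_id
    private long[] active;                 // {next ID to hand out, inclusive max}
    private long[] standby;                // pre-fetched segment, or null if none

    DoubleBufferAllocator(long step, LongSupplier fetchMaxId) {
        this.step = step;
        this.fetchMaxId = fetchMaxId;
        this.active = fetchSegment();
    }

    private long[] fetchSegment() {
        long max = fetchMaxId.getAsLong();          // DB has already done: max_id = max_id + step
        return new long[] { max - step + 1, max };  // the segment (max - step, max]
    }

    synchronized long nextId() {
        long remaining = active[1] - active[0] + 1;
        // Pre-fetch the standby segment once 10% of the active one is consumed.
        if (standby == null && step - remaining >= step / 10) {
            standby = fetchSegment();               // in production: a background thread
        }
        if (active[0] > active[1]) {                // active buffer depleted: switch
            active = standby != null ? standby : fetchSegment();
            standby = null;
        }
        return active[0]++;
    }
}
```

Because the standby buffer is filled well before the active one runs out, crossing a segment boundary never stalls callers on a database round‑trip, which is how the scheme absorbs bursts.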
The double‑buffer design thus smooths out spikes, reduces latency, and improves fault tolerance.
In summary, the segment‑allocation with double‑buffer approach is the distributed ID algorithm used by Meituan, offering configurable step sizes, automatic monitoring of ID usage, and resilience against database failures, while noting that the IDs are highly sequential and may not be suitable for order IDs that require unpredictability.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as architecture evolution driven by internet technologies. Idea‑driven, sharing‑minded architects are welcome to exchange and learn together.
