How to Boost E‑commerce Inventory Performance Without Sacrificing Consistency
This article explains how the inventory service at a large e‑commerce platform was optimized for high‑traffic sales events by redesigning transaction handling, sharding data by SKU, and combining flexible and rigid transactions to improve throughput while preserving data consistency.
Background
The inventory‑center service hit a performance bottleneck during the 2018 big‑sale event. Although a series of degradation measures allowed it to survive, concerns remained about stability for the next year, prompting a performance‑optimization project in early 2019.
The front‑end system depends on the inventory service for both query and update traffic. Query traffic is usually served from cache, while update traffic follows three typical flows: the user places an order → inventory is reserved; the user pays → the reserved inventory is deducted; the user cancels → the reserved inventory is released back to stock.
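The three update flows above can be sketched as state transitions on a per‑SKU inventory record. This is a minimal sketch; the field and method names are illustrative, not the platform's actual schema:

```python
# Illustrative per-SKU inventory record with the three update flows.
from dataclasses import dataclass

@dataclass
class Inventory:
    available: int   # stock open for new orders
    reserved: int    # stock held by unpaid orders

    def reserve(self, qty: int) -> bool:
        """User places an order: move stock from available to reserved."""
        if self.available < qty:
            return False
        self.available -= qty
        self.reserved += qty
        return True

    def deduct(self, qty: int) -> None:
        """User pays: reserved stock is deducted for good."""
        self.reserved -= qty

    def release(self, qty: int) -> None:
        """User cancels: reserved stock returns to available."""
        self.reserved -= qty
        self.available += qty

inv = Inventory(available=10, reserved=0)
inv.reserve(3)   # order placed -> available=7, reserved=3
inv.deduct(2)    # payment      -> reserved=1
inv.release(1)   # cancel       -> available=8, reserved=0
```

Keeping reserved stock separate from available stock is what lets a cancellation restore inventory without double‑counting a paid deduction.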
Why Scaling Alone Won’t Solve the Problem
E‑commerce platforms typically use a distributed database with a sharding key such as store ID or user ID, which keeps different users' or stores' data from contending. Inventory, however, is a resource shared across all orders: many concurrent orders compete for the same SKU, causing heavy lock contention and low throughput.
Sharding the distributed database by skuId places different SKUs on different nodes, but a single order often touches several SKUs, so deducting them atomically still requires XA transactions to keep the data consistent. XA transactions are expensive and exacerbate performance degradation during high‑traffic promotions.
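A toy router illustrates why sharding by skuId alone does not remove XA: one order's SKUs typically hash to several different nodes. The hash‑mod routing below is an assumption for illustration; the real DDB routing may differ:

```python
# Toy shard router: an order with several SKUs usually spans shards,
# so a plain single-node transaction cannot cover the whole deduction.
NUM_SHARDS = 4

def shard_of(sku_id: int) -> int:
    return sku_id % NUM_SHARDS  # assumed hash-mod routing

order_skus = [101, 102, 207]
touched = {shard_of(s) for s in order_skus}
# len(touched) > 1 -> cross-node write -> XA (two-phase commit) needed
```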
Industry Solutions Worth Considering
Several approaches are common in the industry:
A. Keep Inventory in Memory (Cache)
Caching offers orders‑of‑magnitude speed gains but lacks transactional guarantees, so the application must ensure consistency, which is difficult in the current system.
B. Non‑Transactional Execution
Running without transactions improves performance but suffers from the same consistency risks as caching.
C. Use a Faster Database
Some platforms adopt Oracle or custom‑built databases for better performance, though no such plan exists here.
D. Business‑Level Sharding
Splitting by store works for C2C platforms where SKUs are store‑specific, but it is unsuitable for a single‑store model like this platform.
How the Inventory Center Was Optimized
To balance consistency and performance, the team introduced a “flexible transaction” model. Rigid transactions are split to the SKU level, committing immediately after each SKU’s deduction. A global flexible transaction coordinates the whole order, rolling back any previously deducted SKUs if any SKU fails due to insufficient stock.
This design reduces lock waiting to only the competing SKU, eliminating cross‑SKU contention.
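The flexible‑transaction flow can be sketched as a saga: each SKU's deduction is a local rigid transaction that commits immediately, and the first failure triggers compensation of the SKUs already deducted. This is a minimal in‑memory sketch; the real system would persist each step so compensation survives crashes:

```python
# Saga-style sketch of the "flexible transaction": each SKU deduction
# commits on its own; on the first failure, already-committed SKUs
# are compensated. Names are illustrative.
def deduct_order(stock: dict, order: dict) -> bool:
    done = []
    for sku, qty in order.items():
        if stock.get(sku, 0) < qty:       # local rigid tx fails
            for d_sku, d_qty in done:     # compensate prior commits
                stock[d_sku] += d_qty
            return False
        stock[sku] -= qty                 # local rigid tx commits at once
        done.append((sku, qty))
    return True

stock = {"A": 5, "B": 1}
assert deduct_order(stock, {"A": 2, "B": 1}) is True
assert deduct_order(stock, {"A": 2, "B": 1}) is False  # B exhausted
assert stock == {"A": 3, "B": 0}                       # A was rolled back
```

Because each deduction commits immediately, a competing order waits only on the lock of the one SKU it shares, not on a long transaction spanning the whole order.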
When traffic grows beyond a single node’s capacity, the inventory data model is sharded by skuId and migrated to a distributed database (DDB). Because each SKU’s transaction stays on its own DDB node, XA transactions are avoided. The transaction component itself is also sharded, ensuring it resides on the same node as the business data, allowing the service to scale with DDB expansion.
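Co‑locating the transaction component with the business data comes down to using the same sharding key for both tables, so each SKU's local transaction touches exactly one node. A minimal sketch, assuming hash‑mod routing:

```python
# Both the inventory table and the flexible-transaction record table
# are sharded by skuId, so the two rows for one SKU always land on the
# same DDB node and can be written in one local transaction (no XA).
NUM_SHARDS = 4

def shard_of(sku_id: int) -> int:
    return sku_id % NUM_SHARDS  # assumed hash-mod routing

def local_tx_possible(sku_id: int) -> bool:
    inventory_node = shard_of(sku_id)  # inventory row
    tx_log_node = shard_of(sku_id)     # flexible-tx record row
    return inventory_node == tx_log_node
```

With this layout, adding DDB nodes redistributes SKUs but never splits a SKU's inventory row from its transaction record, so the no‑XA property is preserved as the cluster scales.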
Conclusion
Multiple performance‑optimization ideas exist for inventory services; the appropriate solution depends on the current business context. While caching‑based deduction could be considered in the future, it introduces consistency risks that must be mitigated before adoption. A well‑chosen, balanced approach often delivers more practical value than chasing a single optimal metric.
Yanxuan Tech Team