Boost Order Processing Speed with Segmented Locks and Redis

This article explains how to use segmented (sharded) locks and Redis‑based routing strategies to parallelize inventory deduction and order creation, dramatically increasing orders processed per second while maintaining atomicity and fault tolerance.


1. Segmented Lock Concept

To boost user activity or product visibility, merchants often run low‑price promotions with limited stock. When inventory is scarce and demand spikes, the number of orders the server can process per unit time becomes the critical constraint.

During order placement, the system first checks inventory (e.g., 50 ms), then deducts stock (100 ms), and finally creates the order (150 ms), totaling 300 ms per order, which limits processing to about three orders per second.

By running stock deduction and order creation in parallel once sufficient inventory is confirmed, the per‑order time drops to 50 ms + max(100 ms, 150 ms) = 200 ms, raising throughput to five orders per second.
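The parallel step can be sketched with a thread pool. The function names and sleep durations below are stand‑ins for the latencies quoted above, not real order‑system calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the three steps; sleeps mimic the latencies above.
def check_stock():
    time.sleep(0.05)   # 50 ms inventory check
    return True

def deduct_stock():
    time.sleep(0.10)   # 100 ms stock deduction

def create_order():
    time.sleep(0.15)   # 150 ms order creation

def place_order():
    if not check_stock():
        return False
    # Once stock is confirmed, run deduction and creation concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(deduct_stock)
        f2 = pool.submit(create_order)
        f1.result()
        f2.result()
    return True

start = time.perf_counter()
placed = place_order()
elapsed = time.perf_counter() - start
print(f"order took about {elapsed * 1000:.0f} ms")  # ~200 ms, not ~300 ms
```

The total is dominated by the check plus the slower of the two parallel steps, which is where the 200 ms figure comes from.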

However, the distributed lock used for inventory verification becomes a bottleneck because every request must acquire the same lock before proceeding.

Segmented (sharded) locks split the single large lock into n smaller locks, effectively turning one lock into many. Requests are routed to different inventory segments (e.g., stock_1, stock_2). Each segment checks its own stock and, if sufficient, allows the order.

With the original single lock handling five orders per second, splitting into n segments theoretically raises capacity to 5 × n orders per second.
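The segmentation idea can be shown with a minimal in‑memory sketch, using Python threading locks in place of distributed locks and four segments instead of the hundred discussed below; `try_order` and the per‑segment stock list are illustrative names:

```python
import threading

N_SEGMENTS = 4  # illustrative; the example below would use 100

# One lock and one stock counter per segment (in-memory sketch of the idea).
segment_locks = [threading.Lock() for _ in range(N_SEGMENTS)]
segment_stock = [25] * N_SEGMENTS  # total stock of 100, split evenly

def try_order(user_id: int) -> bool:
    """Route the request to one segment and deduct under that segment's lock."""
    seg = hash(user_id) % N_SEGMENTS
    with segment_locks[seg]:  # contends only with users routed to this segment
        if segment_stock[seg] > 0:
            segment_stock[seg] -= 1
            return True
    return False
```

Requests routed to different segments never contend with each other, which is exactly why splitting one lock into n locks multiplies throughput.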

2. Redis Implementation of Segmented Locks

Segmented locks divide a critical resource (inventory) into smaller parts. To reach a target of 500 orders per second when a single lock sustains 5, the system can split the inventory into 100 segments (500 ÷ 5 = 100).

2.1 Segment Keys and Locks

Each segment's remaining stock is stored in Redis with keys like business:stock:{item_id}:{segment_id} and the value representing the stock count for that segment.

When a user places an order, the client acquires a lock on the chosen segment, using the segment's key as the lock identifier and a unique client ID as the lock value. This ensures only one client can process a given segment at a time, and only the lock's owner can release it.
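The acquire/release protocol can be sketched as follows. A plain dict stands in for Redis so the snippet runs on its own; a real deployment would issue the equivalent SET NX and compare‑and‑delete (Lua script) commands through a client such as redis-py. The key format follows the article, while the item and segment IDs are made up:

```python
import uuid

# In-memory stand-in for Redis, just to make the locking logic runnable here.
store: dict[str, str] = {}

def acquire(key: str, client_id: str) -> bool:
    """Like SET key client_id NX: succeed only if no one holds the lock."""
    if key in store:
        return False
    store[key] = client_id
    return True

def release(key: str, client_id: str) -> bool:
    """Delete only if we still own the lock (compare-and-delete; in Redis
    this check-then-delete must be done atomically, e.g. via a Lua script)."""
    if store.get(key) == client_id:
        del store[key]
        return True
    return False

item_id, segment_id = 42, 7  # hypothetical IDs
lock_key = f"business:stock:{item_id}:{segment_id}"
me = str(uuid.uuid4())       # unique client ID used as the lock value
```

The unique client ID in the value is what prevents one client from accidentally releasing a lock that another client now holds.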

2.2 Routing Strategies

2.2.1 Hash Routing

Map a user’s ID hash to a specific lock segment, directing the request to the appropriate segment.

2.2.2 Round‑Robin Routing

Distribute requests evenly across all segments to avoid hot spots.

2.2.3 Random Routing

Randomly select a segment for each request. This is simple to implement but can produce uneven load, leaving some segments overloaded while others still hold unsold stock.

2.2.4 Range Routing

Maintain a mapping table that links user ID ranges to specific lock segments, ensuring deterministic routing.
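The four strategies above might look like this in code; N_SEGMENTS, the range mapping table, and the function names are all illustrative:

```python
import itertools
import random

N_SEGMENTS = 100  # matches the 500-orders-per-second example above

def hash_route(user_id: int) -> int:
    """2.2.1 Hash: the same user always lands on the same segment."""
    return hash(user_id) % N_SEGMENTS

_counter = itertools.count()
def round_robin_route() -> int:
    """2.2.2 Round-robin: spread successive requests evenly across segments."""
    return next(_counter) % N_SEGMENTS

def random_route() -> int:
    """2.2.3 Random: simple, but load across segments can skew."""
    return random.randrange(N_SEGMENTS)

# 2.2.4 Range: an illustrative mapping table of (low, high, segment) rows.
RANGES = [(0, 9_999, 0), (10_000, 19_999, 1)]
def range_route(user_id: int) -> int:
    """Deterministic routing via user-ID ranges, with a hash fallback."""
    for low, high, seg in RANGES:
        if low <= user_id <= high:
            return seg
    return hash_route(user_id)  # fallback for IDs outside the table
```

Hash and range routing keep a user pinned to one segment (useful for per‑user limits), while round‑robin gives the most even spread.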

Summary

Segmented locks split a single large lock into multiple smaller locks to improve concurrency.

When using segmented locks, ensure each lock remains atomic, mutually exclusive, and fault‑tolerant.

Written by Lobster Programming

Sharing insights on technical analysis and exchange, making life better through technology.