Designing a High‑Concurrency Ticket‑Seckill System with Load Balancing, Pre‑Deduction, and Go Implementation
This article analyzes the challenges of handling millions of simultaneous train‑ticket purchase requests, presents a multi‑layer load‑balancing architecture, introduces a pre‑deduction inventory strategy using Redis and local memory, and demonstrates a complete Go implementation with performance testing and key architectural insights.
During holidays, millions of users compete for train tickets, creating a classic flash‑sale (seckill) scenario in which the 12306 service must sustain peak QPS beyond that of any typical seckill system.
The author examines the 12306 backend architecture, highlighting three layers of load balancing: OSPF routing, LVS (Linux Virtual Server), and Nginx weighted round‑robin, and explains how these layers distribute traffic across a large cluster.
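The Nginx layer's weighted round‑robin can be sketched in Go. The sketch below implements the *smooth* weighted round‑robin algorithm that Nginx's upstream module uses; the server names and weights are illustrative, not taken from the article:

```go
package main

import "fmt"

// server mirrors one upstream entry in an Nginx-style weighted
// round-robin pool; names and weights here are made up for illustration.
type server struct {
	name          string
	weight        int // configured weight, as in an Nginx upstream block
	currentWeight int // running counter used by the smooth algorithm
}

// next implements smooth weighted round-robin: every server's counter
// grows by its weight, the largest counter wins the request, and the
// winner is penalized by the total weight so the others catch up.
func next(pool []*server) *server {
	total := 0
	var best *server
	for _, s := range pool {
		s.currentWeight += s.weight
		total += s.weight
		if best == nil || s.currentWeight > best.currentWeight {
			best = s
		}
	}
	best.currentWeight -= total
	return best
}

func main() {
	pool := []*server{
		{name: "10.0.0.1", weight: 5},
		{name: "10.0.0.2", weight: 1},
		{name: "10.0.0.3", weight: 1},
	}
	// With weights 5:1:1, seven picks interleave the heavy server
	// instead of sending five requests to it in a burst.
	for i := 0; i < 7; i++ {
		fmt.Print(next(pool).name, " ")
	}
	fmt.Println()
}
```

The smoothing matters under seckill load: a naive weighted scheme would send bursts to the heaviest backend, while this variant spreads them out.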
To avoid overselling and underselling tickets, the article proposes a three‑stage order flow (create order → deduct inventory → user payment) and evaluates two inventory‑deduction approaches: immediate deduction (order‑first) and pre‑deduction (reserve inventory first). The pre‑deduction method stores a buffer of tickets locally on each server and synchronizes with a central Redis hash, ensuring high performance while preventing stock loss.
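The per‑server buffer can be sketched as a simple split of total inventory across application nodes. This is an illustrative sketch, not code from the article: the even split and the name `splitStock` are assumptions, and a real deployment might weight buffers by node capacity.

```go
package main

import "fmt"

// splitStock divides the central inventory across n application servers
// for local pre-deduction. The remainder goes to the first servers so
// the buffers always sum to the total: no ticket is lost or duplicated.
func splitStock(total, n int) []int {
	buffers := make([]int, n)
	base, rem := total/n, total%n
	for i := range buffers {
		buffers[i] = base
		if i < rem {
			buffers[i]++ // distribute the remainder one ticket at a time
		}
	}
	return buffers
}

func main() {
	fmt.Println(splitStock(1000, 3)) // e.g. [334 333 333]
}
```

Because each node owns its slice outright, a node can answer most requests without touching Redis, and a crashed node forfeits only its own buffer, which is the failure‑tolerance property the article attributes to pre‑deduction.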
Redis is used as the central inventory store because of its sub‑millisecond latency and ability to handle >100k QPS. A Lua script guarantees an atomic check‑and‑increment on the sold counter:
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- sell only while the sold count is strictly below the total;
-- a non-strict (>=) comparison here would oversell by one ticket
if ticket_total_nums > ticket_sold_nums then
return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0

The Go service initializes local stock, a Redis connection pool (using Redigo), and a channel‑based lock to serialize critical sections. Sample Go code for local stock deduction:
// Caller must hold the channel-based lock: this read-modify-write is not atomic.
func (spike *LocalSpike) LocalDeductionStock() bool {
	spike.LocalSalesVolume++
	// <= rather than <: with a strict comparison the last ticket in the
	// local buffer could never be sold (off-by-one undersell).
	return spike.LocalSalesVolume <= spike.LocalInStock
}

The HTTP handler combines local and remote deductions, returns JSON responses, and logs results to ./stat.log. Performance testing with ApacheBench shows the single‑node service handling over 4,000 requests per second, confirming the effectiveness of the design.
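The combined local‑plus‑remote deduction path can be sketched as follows. This is a self‑contained approximation, not the article's code: `trySpike` and `remoteDeduct` are invented names, `remoteDeduct` stands in for running the Lua script through a Redigo pool, and the rollback of the local counter on failure is an assumption about how the two stages stay consistent.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// LocalSpike holds this node's slice of the inventory, as in the article.
type LocalSpike struct {
	LocalInStock     int64
	LocalSalesVolume int64
}

// LocalDeductionStock tentatively takes one ticket from the local buffer.
func (spike *LocalSpike) LocalDeductionStock() bool {
	spike.LocalSalesVolume++
	return spike.LocalSalesVolume <= spike.LocalInStock
}

// trySpike performs one purchase attempt. A one-slot channel serves as
// the article's channel-based lock around the critical section; the
// local buffer is checked first, then the central store. The rollback
// on failure is an assumption for illustration.
func trySpike(spike *LocalSpike, lock chan struct{}, remoteDeduct func() bool) bool {
	lock <- struct{}{}        // acquire
	defer func() { <-lock }() // release
	if !spike.LocalDeductionStock() {
		spike.LocalSalesVolume-- // local buffer exhausted: undo the tentative increment
		return false
	}
	if !remoteDeduct() {
		spike.LocalSalesVolume-- // central stock gone: roll back the local count
		return false
	}
	return true
}

// handleSpike exposes trySpike over HTTP with a small JSON response.
func handleSpike(spike *LocalSpike, lock chan struct{}, remoteDeduct func() bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]bool{"success": trySpike(spike, lock, remoteDeduct)})
	}
}

func main() {
	spike := &LocalSpike{LocalInStock: 2}
	lock := make(chan struct{}, 1)
	remote := func() bool { return true } // stand-in for the Redis/Lua call
	for i := 0; i < 3; i++ {
		fmt.Println(trySpike(spike, lock, remote)) // third attempt fails: buffer empty
	}
}
```

Serializing only the deduction (not the whole request) keeps the lock hold time tiny, which is what lets a single node sustain thousands of requests per second in the benchmark.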
Key takeaways include the importance of load balancing to split traffic, leveraging in‑memory pre‑deduction to avoid database bottlenecks, and using asynchronous order creation via message queues. The architecture also tolerates node failures by reserving buffer inventory on each server.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.