Designing a High‑Concurrency Ticket Flash‑Sale System with Load Balancing, Nginx Weighted Round‑Robin, and Go
This article explains how to build a high‑concurrency ticket flash‑sale ("seckill") system that can handle one million users competing for ten thousand tickets, using distributed load balancing, Nginx weighted round‑robin, Go‑based HTTP services, Redis atomic stock deduction, and practical performance testing.
During holidays, millions of users compete for train tickets, pushing the 12306 service to extreme QPS levels; the author studies its architecture and demonstrates a prototype that can serve 1,000,000 concurrent users buying 10,000 tickets.
High‑Concurrency Architecture – The system adopts a distributed cluster with multiple layers of load balancers (OSPF, LVS, Nginx) and disaster‑recovery mechanisms (dual data centers, node fault‑tolerance) to ensure high availability.
Load‑Balancing Overview – Three types of load balancing are introduced: OSPF (cost‑based routing), LVS (IP virtual server), and Nginx (round‑robin, weighted round‑robin, IP‑hash). The article provides a simple diagram of the three‑layer flow.
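Of the three Nginx strategies named above, weighted round‑robin is demonstrated below; for comparison, ip_hash needs only one extra directive. The following fragment is a sketch modeled on the article's own upstream block, not configuration from the article itself:

```nginx
upstream load_rule {
    # ip_hash pins requests from the same client IP to the same backend,
    # which gives session affinity at the cost of less even distribution.
    ip_hash;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
```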
Nginx Weighted Round‑Robin Demo
# Load-balancing configuration
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
...
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

The author then creates four Go services listening on ports 3001‑3004 and uses ab to verify that the distribution of requests matches the configured 1:2:3:4 weights (100, 200, 300, 400 requests respectively).
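One of those four backends might look like the sketch below; the port argument and the echoed response body are assumptions for illustration, not the article's exact code:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// response builds the body each backend returns; echoing the port makes it
// easy to count how many requests each upstream received during the ab run.
func response(port string) string {
	return fmt.Sprintf("handled by port %s\n", port)
}

func main() {
	// The same binary is launched four times, e.g. `go run server.go 3002`,
	// to fill ports 3001-3004 behind the upstream block.
	port := "3001"
	if len(os.Args) > 1 {
		port = os.Args[1]
	}
	http.HandleFunc("/buy/ticket", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, response(port))
	})
	if err := http.ListenAndServe(":"+port, nil); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

With all four instances running behind the upstream block, `ab -n 1000 -c 100 http://load_balance.com/buy/ticket` should show request counts proportional to the configured weights.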
Spike System Design – Three stock‑deduction strategies are compared: (1) order‑then‑stock, (2) payment‑then‑stock, and (3) pre‑deduction with asynchronous order creation. The pre‑deduction approach is chosen for its ability to avoid DB I/O and reduce oversell/undersell risks.
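The pre‑deduction pattern keeps the database write off the hot path: the handler deducts stock first (in memory or Redis), then hands the order to a queue that a background goroutine drains. A minimal sketch of that asynchronous order writer, with a hypothetical `order` struct standing in for the real record:

```go
package main

import "fmt"

// order is a minimal stand-in for an order record (hypothetical fields).
type order struct {
	userID, ticketID int
}

// startOrderWriter launches a single consumer that persists queued orders
// asynchronously; the returned channel is closed once the queue drains.
func startOrderWriter(queue <-chan order, persisted *[]order) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		for o := range queue {
			// A real system would write to MySQL or a message queue here.
			*persisted = append(*persisted, o)
		}
		close(done)
	}()
	return done
}

func main() {
	// Handlers enqueue only after a successful stock deduction, so the
	// request path never waits on database I/O.
	queue := make(chan order, 1024)
	var persisted []order
	done := startOrderWriter(queue, &persisted)

	for i := 1; i <= 3; i++ {
		queue <- order{userID: i, ticketID: 1001}
	}
	close(queue)
	<-done
	fmt.Println("persisted orders:", len(persisted))
}
```

The buffered channel bounds how many orders can be in flight, which also acts as back-pressure if the persistence layer falls behind.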
Local Stock Deduction
package localSpike

// LocalDeductionStock deducts local stock and reports whether the ticket can
// be sold. Callers must serialize access (the article guards this with a
// channel-based lock). Note: <= (not <) lets the final ticket sell when
// LocalSalesVolume reaches LocalInStock exactly.
func (spike *LocalSpike) LocalDeductionStock() bool {
	spike.LocalSalesVolume = spike.LocalSalesVolume + 1
	return spike.LocalSalesVolume <= spike.LocalInStock
}

Remote Stock Deduction (Redis + Lua)
package remoteSpike

import "github.com/gomodule/redigo/redis"

// LuaScript performs the check-and-increment atomically inside Redis.
// Note: '>' (not '>=') prevents overselling, since the check runs before
// the increment; with '>=' the counter could pass the total by one.
const LuaScript = `
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
`

// RemoteDeductionStock runs the script; a non-zero result means the remote
// (Redis) stock deduction succeeded.
func (RemoteSpikeKeys *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
	lua := redis.NewScript(1, LuaScript)
	result, err := redis.Int(lua.Do(conn, RemoteSpikeKeys.SpikeOrderHashKey, RemoteSpikeKeys.TotalInventoryKey, RemoteSpikeKeys.QuantityOfOrderKey))
	if err != nil {
		return false
	}
	return result != 0
}

Initialization sets the Redis hash keys — hmset ticket_hash_key "ticket_total_nums" 10000 "ticket_sold_nums" 0 — creates a channel‑based lock, and launches the HTTP handler:
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/buy/ticket", handleReq)
	log.Fatal(http.ListenAndServe(":3005", nil))
}

The request handler obtains a Redis connection, performs local and remote stock deduction atomically under the channel‑based lock, returns a JSON success/failure response, and logs the result.
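A minimal sketch of what such a handler might look like, with hypothetical in‑memory counters standing in for the article's localSpike/remoteSpike packages (the Redis call is omitted here):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// done is the channel-based lock: holding the single token grants exclusive
// access to the shared stock counters.
var done = make(chan struct{}, 1)

func init() { done <- struct{}{} }

// Hypothetical counters standing in for the localSpike state.
var (
	localInStock     int64 = 10000
	localSalesVolume int64
)

// deductStock acquires the lock, tries to sell one ticket, and releases it.
// The article's handler additionally calls RemoteDeductionStock against Redis.
func deductStock() bool {
	<-done
	defer func() { done <- struct{}{} }()
	if localSalesVolume < localInStock {
		localSalesVolume++
		return true
	}
	return false
}

func handleReq(w http.ResponseWriter, r *http.Request) {
	resp := map[string]interface{}{"code": 0, "msg": "sold out"}
	if deductStock() {
		resp = map[string]interface{}{"code": 1, "msg": "ok"}
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	// Exercise the handler once without opening a socket.
	req := httptest.NewRequest(http.MethodGet, "/buy/ticket", nil)
	rec := httptest.NewRecorder()
	handleReq(rec, req)
	fmt.Print(rec.Body.String())
}
```

The one‑slot channel gives the same mutual exclusion as a mutex while staying idiomatic to the article's channel‑based design.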
Performance Test – Using ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket, the single‑machine prototype processes over 4,000 requests per second with stable latency, confirming the effectiveness of the design.
Conclusion – The article summarizes key takeaways: (1) load balancing distributes traffic to avoid single‑point overload, and (2) leveraging Go’s concurrency model, in‑memory stock handling, and Redis’s atomic operations yields a high‑performance, fault‑tolerant flash‑sale system.