Designing a High‑Concurrency Ticket Spike System with Nginx Load Balancing, Redis, and Go
This article explores the architecture and implementation of a high‑concurrency ticket‑spike system, covering load‑balancing strategies, Nginx weighted round‑robin configuration, local stock deduction in Go, remote stock control with Redis Lua scripts, and performance testing results.
During holiday periods, users face the scramble of snatching train tickets (抢火车票) amid massive concurrent requests, with the 12306 booking service handling millions of requests per second at peak. The article analyzes how to design a system that can serve 1 million users simultaneously while keeping the service stable.
Load‑Balancing Overview
The traffic passes through three layers of load balancers: OSPF (an interior gateway protocol), LVS (Linux Virtual Server), and Nginx. Each layer distributes requests across multiple servers, providing high availability and fault tolerance.
Nginx Weighted Round‑Robin
The Nginx upstream configuration assigns weights to backend servers (e.g., ports 3001‑3004 with weights 1‑4) to reflect their processing capacity:
# Load-balancing configuration
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}
This ensures traffic is distributed in proportion to each server's capacity.
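To see what weighted distribution means in practice, the smooth weighted round-robin algorithm that Nginx uses for these weights can be sketched in Go. This is an illustrative re-implementation (names like backend, next, and pickCounts are ours, not from the article):

```go
package main

import "fmt"

// backend holds one upstream server: its configured weight and the
// running "current weight" used by smooth weighted round-robin.
type backend struct {
	addr          string
	weight        int
	currentWeight int
}

// next picks a backend the way Nginx does: add each server's weight to
// its current weight, choose the largest, then subtract the total
// weight from the winner so that picks interleave smoothly.
func next(pool []*backend) *backend {
	total := 0
	var best *backend
	for _, b := range pool {
		b.currentWeight += b.weight
		total += b.weight
		if best == nil || b.currentWeight > best.currentWeight {
			best = b
		}
	}
	best.currentWeight -= total
	return best
}

// pickCounts runs n picks and tallies how often each backend is chosen.
func pickCounts(pool []*backend, n int) map[string]int {
	counts := make(map[string]int)
	for i := 0; i < n; i++ {
		counts[next(pool).addr]++
	}
	return counts
}

func main() {
	pool := []*backend{
		{addr: "127.0.0.1:3001", weight: 1},
		{addr: "127.0.0.1:3002", weight: 2},
		{addr: "127.0.0.1:3003", weight: 3},
		{addr: "127.0.0.1:3004", weight: 4},
	}
	// Over one full period (total weight = 10) each server is chosen
	// exactly weight times.
	fmt.Println(pickCounts(pool, 10))
}
```

Over every window of 10 requests, the four backends receive exactly 1, 2, 3, and 4 of them, matching the weights in the upstream block.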
Local Stock Deduction (Go)
Four Go HTTP services listen on ports 3001‑3004. Each service increments a local sales counter and checks against a predefined local inventory:
// LocalDeductionStock increments the node's sales counter and reports
// whether buffered local stock remains; callers must serialize access
// (the article uses a channel-based lock for this).
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume++
    return spike.LocalSalesVolume < spike.LocalInStock
}
If the local check passes, the request proceeds to remote stock deduction.
Remote Stock Deduction with Redis
A Redis hash stores total tickets and sold tickets. A Lua script atomically checks availability and increments the sold count:
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- Strict comparison: only sell while sold < total; a >= comparison
-- here would let the sold count overshoot the total by one.
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
The Go function loads this script via redis.NewScript, executes it, and returns a boolean indicating success.
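Because Redis runs the whole script atomically, the check and the HINCRBY can never interleave with another buyer's. The script's semantics can be modeled in plain Go with a map standing in for the hash (deductStock is an illustrative name, not the article's API):

```go
package main

import "fmt"

// deductStock models the Lua script: compare total against sold and,
// if stock remains, increment the sold count and return its new value
// (as HINCRBY does); otherwise return 0. In Redis the whole script is
// atomic, so no locking is modeled here.
func deductStock(hash map[string]int, totalKey, soldKey string) int {
	if hash[totalKey] > hash[soldKey] { // strict >: never sell past total
		hash[soldKey]++
		return hash[soldKey]
	}
	return 0
}

func main() {
	h := map[string]int{"ticket_total_nums": 2, "ticket_sold_nums": 0}
	for i := 0; i < 4; i++ {
		fmt.Println(deductStock(h, "ticket_total_nums", "ticket_sold_nums"))
	}
	// prints 1, 2, 0, 0: the third and fourth buyers are turned away
}
```

A non-zero return maps to the success boolean on the Go side; 0 means the global stock is exhausted.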
Initialization
Before starting services, the Redis hash is seeded:
hmset ticket_hash_key "ticket_total_nums" 10000 "ticket_sold_nums" 0
Each service also initializes its local inventory and a channel-based lock to serialize stock updates.
Request Handling
The HTTP handler runs the local check and, if it passes, the remote Redis deduction; on success it returns the JSON body {code: 1, msg: "ticket secured" (抢票成功)}, otherwise {code: -1, msg: "sold out" (已售罄)}. Each result is logged to ./stat.log for later analysis.
Performance Testing
Using ApacheBench (ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket), the single-machine setup achieved roughly 4,300 requests per second with an average latency of about 23 ms, showing that the design sustains high QPS without touching the database.
Conclusions
The combination of load‑balancing, in‑memory local stock, Redis‑backed global stock, and asynchronous order creation provides a scalable solution that avoids DB I/O, prevents overselling, tolerates node failures via buffer inventory, and fully utilizes multi‑core servers.