Design and Implementation of a High‑Concurrency Ticket Seckill System Using Go, Nginx, and Redis
This article explains how to build a high-concurrency train-ticket flash-sale ("seckill") system capable of absorbing millions of requests. It combines layered load balancing, Nginx weighted round-robin, in-memory stock deduction, atomic Redis Lua scripts, and a Go HTTP service with channel-based concurrency control, and closes with performance test results and source code.
During peak travel periods, millions of users compete for train tickets, generating a QPS load on the scale of a global flash sale. The author analyzes the 12306 architecture and proposes a design that can serve 1 million users competing for 10 000 tickets while keeping the service stable.
1. System Architecture – A distributed cluster with three layers of load‑balancing (OSPF, LVS, Nginx) distributes traffic to dozens of backend servers. Each server holds a portion of the total inventory locally and also participates in a unified stock reduction via Redis.
2. Load‑Balancing Details
OSPF calculates shortest paths, LVS provides IP virtual server failover, and Nginx performs weighted round‑robin distribution. Example Nginx upstream configuration:
# load-balancing configuration
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

3. Stock Deduction Strategies – Three approaches are compared: (a) create the order, then deduct stock; (b) deduct stock only after payment; and (c) pre-deduct stock with asynchronous order creation. The pre-deduction method is chosen to avoid heavy DB I/O on the hot path.
Local stock is kept in memory; when a request arrives, the server increments a local sales counter and checks against its local inventory. If successful, it also executes a Redis Lua script to atomically update the global stock:
const LuaScript = `
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- sell only while the sold count is strictly below the total; a >= test
-- would let the final increment push sales one past the available stock
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
`

4. Go Service Implementation – The Go program defines LocalSpike and RemoteSpikeKeys structs, initializes a Redis connection pool, and uses a single-element buffered channel as a per-process lock around the deduction path. The request handler performs the local deduction, then the remote (Redis) deduction, and writes the result to a log file.
package main

import (
	"net/http"
)

func main() {
	http.HandleFunc("/buy/ticket", handleReq)
	http.ListenAndServe(":3005", nil)
}

// handleReq serializes access through the channel lock, performs the local
// and remote stock deductions, and writes a JSON result to the response.
func handleReq(w http.ResponseWriter, r *http.Request) {
	// acquire lock via channel, perform deductions, respond JSON
}

5. Performance Testing – Using ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket, the single-node service achieved over 4 000 requests per second with 0% failure, confirming that the design can handle high QPS without overselling.
6. Conclusion – By combining layered load‑balancing, in‑memory stock management, atomic Redis operations, and Go’s native concurrency, the system avoids database bottlenecks, guarantees no oversell or undersell, and tolerates node failures through buffered inventory.