Designing a High‑Concurrency Ticket Spike System: Architecture, Load Balancing, and Go Implementation
This article analyzes the 12306 ticket‑spike scenario, presents a distributed high‑concurrency architecture with layered load balancing, compares order‑creation strategies, demonstrates local and remote stock deduction using Go and Redis, and validates performance with ApacheBench testing.
1. Large‑Scale High‑Concurrency System Architecture
High‑concurrency systems are typically deployed as distributed clusters with multiple layers of load balancing and disaster‑recovery mechanisms (dual data centers, node fault tolerance, server backup) to ensure high availability. Traffic is evenly distributed to servers based on capacity and configuration.
1.1 Load Balancing Overview
Traffic typically passes through three layers of load balancing before reaching the application servers. The three common methods are:
OSPF – an interior gateway protocol that builds a link‑state database and calculates shortest‑path trees; costs are inversely proportional to bandwidth, and equal‑cost paths can be load‑balanced across up to six links.
LVS (Linux Virtual Server) – a cluster technology using IP load balancing and content‑based request distribution, automatically masking server failures.
Nginx – a high‑performance HTTP reverse‑proxy that supports round‑robin, weighted round‑robin, and IP‑hash load balancing.
1.2 Nginx Weighted Round‑Robin Demo
The upstream module implements weighted round‑robin. The following configuration assigns weights 1‑4 to four local services listening on ports 3001‑3004:
# Configure load balancing
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

A Go program starts four HTTP services on ports 3001-3004, each logging requests to ./stat.log:
package main

import (
    "net/http"
    "os"
)

const port = "3001" // change to 3002/3003/3004 for the other three instances

func main() {
    http.HandleFunc("/buy/ticket", handleReq)
    http.ListenAndServe(":"+port, nil)
}

// handleReq records which instance served the request.
func handleReq(w http.ResponseWriter, r *http.Request) {
    writeLog("handle in port:"+port, "./stat.log")
}

// writeLog appends a message to the log file.
func writeLog(msg string, logPath string) {
    fd, err := os.OpenFile(logPath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return
    }
    defer fd.Close()
    fd.Write([]byte(msg + "\r\n"))
}

Generating 1,000 requests at concurrency 100 with ab confirms that the distribution matches the configured weights: 100, 200, 300, and 400 requests land on the four instances respectively.
2. Spike System Design Choices
When millions of users simultaneously attempt to purchase tickets, the system must guarantee that orders are neither oversold nor undersold, and that each ticket is paid for before becoming valid.
2.1 Order‑First, Stock‑Deduction Later
Creating an order first and then deducting stock ensures atomicity but incurs heavy DB I/O and risks "underselling" if users abandon payment.
2.2 Payment‑First, Stock‑Deduction Later
Deducting stock after payment avoids underselling but can cause "overselling" under extreme concurrency and still suffers from DB I/O bottlenecks.
2.3 Pre‑Deduction (Reserve Stock)
Reserving stock before order creation eliminates frequent DB writes. If a user does not pay within a timeout (e.g., 5 minutes), the reserved stock is released back to the pool. Orders are processed asynchronously via a message queue (e.g., Kafka).
3. The Art of Stock Deduction
Local in‑memory stock deduction combined with a remote Redis‑based unified stock counter provides high performance while preventing oversell.
Local deduction logic (Go):
package localSpike

// LocalSpike holds the per-node stock counters kept in memory.
type LocalSpike struct {
    LocalInStock     int64
    LocalSalesVolume int64
}

// LocalDeductionStock records a sale and reports whether local stock remains;
// the comparison is <= so the last unit of local stock can still be sold.
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume = spike.LocalSalesVolume + 1
    return spike.LocalSalesVolume <= spike.LocalInStock
}

Remote deduction uses a Lua script executed atomically in Redis:
const LuaScript = `
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- sell only while sold < total; using '>' (not '>=') prevents overselling the last ticket
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
`
// RemoteSpikeKeys names the Redis hash key and fields for the unified counter.
type RemoteSpikeKeys struct {
    SpikeOrderHashKey  string // Redis hash key (KEYS[1])
    TotalInventoryKey  string // hash field holding total stock
    QuantityOfOrderKey string // hash field holding the sold count
}

// RemoteDeductionStock runs the Lua script atomically and reports
// whether a unit was successfully sold.
func (r *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    lua := redis.NewScript(1, LuaScript)
    result, err := redis.Int(lua.Do(conn, r.SpikeOrderHashKey, r.TotalInventoryKey, r.QuantityOfOrderKey))
    if err != nil {
        return false
    }
    return result != 0
}

Initial Redis state (CLI):
hmset ticket_hash_key "ticket_total_nums" 10000 "ticket_sold_nums" 0

4. Code Demonstration
Server initialization (Go):
package main

// localSpike, remoteSpike and redisPool are assumed to be initialized at
// startup (local stock counters, Redis key names, and a connection pool).
var done = make(chan int, 1) // buffered channel used as a lock around deduction

func main() {
    done <- 1 // seed the lock so the first request can acquire it
    http.HandleFunc("/buy/ticket", handleReq)
    http.ListenAndServe(":3005", nil)
}

Request handling combines local and remote deduction, writes a JSON response, and logs the result:
func handleReq(w http.ResponseWriter, r *http.Request) {
    redisConn := redisPool.Get()
    defer redisConn.Close()
    var logMsg string
    <-done // acquire the channel lock
    if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
        util.RespJson(w, 1, "ticket grabbed successfully", nil)
        logMsg = "result:1,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    } else {
        util.RespJson(w, -1, "sold out", nil)
        logMsg = "result:0,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    }
    done <- 1 // release the lock
    writeLog(logMsg, "./stat.log")
}

4.4 Single-Node Load Test
Using ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket, the single-node service handled over 4,000 requests per second with no failed requests, confirming the effectiveness of the in-memory and Redis-based design.
5. Summary
The spike system demonstrates how to build a high‑concurrency ticketing service by combining layered load balancing, local in‑memory stock reservation, and a Redis‑backed unified counter, thereby avoiding costly database I/O, preventing oversell/undersell, and tolerating partial node failures.
Key takeaways are the importance of load balancing to distribute traffic and the strategic use of concurrency and asynchronous processing to maximize CPU utilization.
Java Captain
Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.