
Designing a High‑Concurrency Ticket Spike System: Architecture, Load Balancing, and Go Implementation

This article explores the design of a high‑concurrency train‑ticket flash‑sale system, covering distributed load‑balancing strategies, Nginx weighted round‑robin configuration, local and remote stock deduction using Go and Redis, performance testing with ApacheBench, and key architectural lessons for preventing overselling and ensuring high availability.


When millions of users simultaneously try to purchase limited train tickets, the 12306 service faces extreme QPS that exceeds typical e‑commerce spikes. To keep the system stable, the author analyzes the service architecture and demonstrates a prototype that can handle 1 million concurrent users buying 10 000 tickets.

Load‑balancing overview

Three layers of load balancing are introduced: OSPF routing, LVS (Linux Virtual Server), and Nginx. Nginx's weighted round-robin method is highlighted, and a sample configuration is shown:

upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

Local stock deduction (Go)

// Not goroutine-safe on its own: callers must serialize access,
// e.g. through a one-slot channel used as a lock.
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume++
    // '<' would reject the final ticket; '<=' sells exactly LocalInStock.
    return spike.LocalSalesVolume <= spike.LocalInStock
}

The Go service also defines HTTP handlers that log each request and respond with success or sold‑out messages.

Remote stock deduction with Redis

const LuaScript = `
local ticket_key = KEYS[1]
local total_key = ARGV[1]
local sold_key  = ARGV[2]
local total = tonumber(redis.call('HGET', ticket_key, total_key))
local sold  = tonumber(redis.call('HGET', ticket_key, sold_key))
-- Increment only while sold is strictly below total; 'total >= sold'
-- would permit one final oversell when the two counts are equal.
if total > sold then
    return redis.call('HINCRBY', ticket_key, sold_key, 1)
end
return 0
`
func (keys *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    // The script runs atomically inside Redis, so the check and the
    // increment can never interleave across nodes.
    lua := redis.NewScript(1, LuaScript)
    result, err := redis.Int(lua.Do(conn, keys.SpikeOrderHashKey, keys.TotalInventoryKey, keys.QuantityOfOrderKey))
    if err != nil {
        return false
    }
    return result != 0
}

Before starting, the Redis hash is initialized with total tickets (e.g., 10000) and sold count 0.
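The initialization might look like the following sketch, assuming the gomodule/redigo client used above; the hash key and field names here are illustrative, not taken from the original source:

```go
package main

import "github.com/gomodule/redigo/redis"

func main() {
	conn, err := redis.Dial("tcp", "127.0.0.1:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Seed the hash before the sale opens: the total ticket count and
	// a zero sold counter (key and field names are illustrative).
	if _, err := conn.Do("HMSET", "ticket_hash_key",
		"total_nums", 10000, "sold_nums", 0); err != nil {
		panic(err)
	}
}
```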

Performance testing

Using ApacheBench (ab -n 10000 -c 100) against the Go service, the test reports ~4 300 requests per second with an average latency of 23 ms, confirming that a single node can handle several thousand QPS and that the weighted load‑balancing distributes traffic as expected.

Key takeaways

Distribute traffic with multi‑level load balancers (OSPF → LVS → Nginx) to avoid single‑point overload.

Perform stock deduction locally in memory to eliminate database I/O, then synchronize with Redis for global consistency.

Use Go’s lightweight goroutines and channel‑based locking to serialize critical sections without heavy mutexes.

Reserve buffer inventory on each node to tolerate server failures while preventing overselling.
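One way to read the buffer-inventory takeaway in code (a sketch; the split arithmetic and reserve size are assumptions, not from the article): give each of N nodes an equal slice of the total, holding back a reserve that is released only when a node fails.

```go
package main

import "fmt"

// splitStock divides total tickets across n nodes, holding back
// `reserve` tickets as a shared buffer against node failures.
func splitStock(total, n, reserve int) (perNode, buffer int) {
	sellable := total - reserve
	perNode = sellable / n
	// Any remainder from the integer division joins the buffer too,
	// so the sum of all allocations never exceeds `total`.
	buffer = reserve + sellable%n
	return perNode, buffer
}

func main() {
	perNode, buffer := splitStock(10000, 4, 1000)
	fmt.Println(perNode, buffer) // 2250 1000
}
```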

The article concludes that combining efficient load‑balancing, in‑memory stock management, and Redis atomic operations yields a robust, high‑throughput ticket‑spike system capable of handling extreme concurrency without overselling or significant downtime.

Tags: distributed architecture · load balancing · Redis · Go · high concurrency · Nginx · ticketing system
Written by

IT Architects Alliance

A community for discussion and exchange on systems, internet-scale distributed, high-availability, and high-performance architectures, as well as big data, machine learning, AI, and architecture evolution with internet technologies. Includes real-world large-scale architecture case studies. Open to architects who have ideas and enjoy sharing.
