Building a Million‑User Ticket‑Spiking System with Nginx Load Balancing, Redis, and Go

This article explores how to design a high‑concurrency ticket‑spike service inspired by China’s 12306 platform, covering multi‑layer load balancing, local stock pre‑allocation, Redis‑based global inventory control, Go implementation details, and performance testing that demonstrates handling millions of simultaneous requests.

12306 Ticket Spike under Extreme Concurrency

During holidays, millions of users compete for train tickets the moment they become available, creating a scenario where the 12306 service must handle QPS levels that surpass any typical flash‑sale system.

Load Balancing Overview

Large‑scale high‑concurrency systems rely on distributed clusters and multiple layers of load balancing, including OSPF routing, LVS (Linux Virtual Server) IP load balancing, and Nginx reverse proxy.

OSPF

OSPF is an interior gateway protocol that builds a link‑state database, calculates the shortest path tree, and can assign custom Cost values to interfaces for traffic distribution.

LVS

LVS provides IP‑level load balancing and high‑availability virtual servers, automatically masking failed nodes.

Nginx

Nginx is a high‑performance HTTP reverse proxy that supports three load‑balancing methods: round‑robin, weighted round‑robin, and IP‑hash.

Weighted Round‑Robin Configuration

# Configure the load-balancing upstream
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

Local Stock Deduction

Each server keeps a local inventory pool (e.g., 100 tickets per node). When a request arrives, the server increments its local sales counter and checks that the counter is still below the node's pre-allocated stock. Because the check never leaves memory, it is extremely fast.

// LocalDeductionStock increments this node's sales counter and reports whether
// the pre-allocated local stock still has tickets left. It is not safe for
// concurrent use on its own; callers serialize access (see the channel lock in handleReq).
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume = spike.LocalSalesVolume + 1
    return spike.LocalSalesVolume < spike.LocalInStock
}
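
The LocalSpike struct behind this method is not shown in the excerpt; a minimal sketch consistent with the fields used above (and with the 100-tickets-per-node example) might look like this:

// Hypothetical definition inferred from the fields referenced above.
type LocalSpike struct {
    LocalInStock     int64 // tickets pre-allocated to this node (e.g., 100)
    LocalSalesVolume int64 // tickets this node has already sold
}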

Remote Stock Deduction with Redis

To guarantee global consistency, the same request also executes a Redis Lua script that atomically checks total inventory and increments the sold count if tickets remain.

const LuaScript = `
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
-- read the total inventory and the current sold count from the hash
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- if tickets remain, atomically increment the sold count and return it
if (ticket_total_nums >= ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
`
// RemoteDeductionStock executes the Lua script atomically on Redis: the hash key
// is the single KEYS[1] argument and the field names are passed via ARGV. A
// non-zero result means a ticket was deducted from the global inventory.
func (r *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    lua := redis.NewScript(1, LuaScript)
    result, err := redis.Int(lua.Do(conn, r.SpikeOrderHashKey, r.TotalInventoryKey, r.QuantityOfOrderKey))
    if err != nil {
        return false
    }
    return result != 0
}
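
The key names passed to the script live on the RemoteSpikeKeys receiver, which the excerpt does not define; a plausible sketch based on the fields referenced above (the hash key name here is an assumption):

// Hypothetical definition inferred from the fields referenced above.
type RemoteSpikeKeys struct {
    SpikeOrderHashKey  string // Redis hash key for this spike event (name assumed)
    TotalInventoryKey  string // hash field holding the total inventory, "ticket_total_nums"
    QuantityOfOrderKey string // hash field holding the sold count, "ticket_sold_nums"
}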

System Initialization

The Go program initializes the local stock, defines the Redis hash field names ("ticket_total_nums" and "ticket_sold_nums"), creates a Redis connection pool, and creates a buffered channel of size 1 that serves as an in-process lock around the stock-deduction critical section on each node; cross-node consistency comes from the atomic Redis script, not from this channel.
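
A minimal sketch of that initialization, assuming the redigo client (github.com/gomodule/redigo) and the package-level names used by the handler in the next section (localSpike, remoteSpike, redisPool, done); the hash key name, pool sizes, and Redis address are illustrative, and the 100-ticket local allocation follows the example above:

package main

import (
    "time"

    "github.com/gomodule/redigo/redis"
)

var (
    localSpike  LocalSpike
    remoteSpike RemoteSpikeKeys
    redisPool   *redis.Pool
    done        chan int
)

func init() {
    // Pre-allocate this node's share of tickets (100 per node, per the example above).
    localSpike = LocalSpike{LocalInStock: 100}

    // Redis hash key (name assumed) and the two field names given in the text.
    remoteSpike = RemoteSpikeKeys{
        SpikeOrderHashKey:  "ticket_hash_key",
        TotalInventoryKey:  "ticket_total_nums",
        QuantityOfOrderKey: "ticket_sold_nums",
    }

    // Connection pool for the Redis instance holding the global inventory (sizes illustrative).
    redisPool = &redis.Pool{
        MaxIdle:     100,
        MaxActive:   12000,
        IdleTimeout: 180 * time.Second,
        Dial: func() (redis.Conn, error) {
            return redis.Dial("tcp", ":6379")
        },
    }

    // Buffered channel of size 1 used as the in-process lock; seed it so the
    // first request can acquire it.
    done = make(chan int, 1)
    done <- 1
}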

Request Handling

func handleReq(w http.ResponseWriter, r *http.Request) {
    // Get a connection from the pool and return it when the handler finishes.
    redisConn := redisPool.Get()
    defer redisConn.Close()
    var LogMsg string
    // Acquire the size-1 channel to serialize the stock-deduction critical section.
    <-done
    // Sell only if both the node-local stock and the global Redis inventory allow it.
    if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
        util.RespJson(w, 1, "ticket grabbed successfully", nil)
        LogMsg = LogMsg + "result:1,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    } else {
        util.RespJson(w, -1, "sold out", nil)
        LogMsg = LogMsg + "result:0,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    }
    // Release the lock, then record the outcome for later analysis.
    done <- 1
    writeLog(LogMsg, "./stat.log")
}
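
Wiring the handler into an HTTP server completes the single-node service. A minimal sketch, assuming the standard net/http and log packages are imported; the /buy/ticket route and port 3005 are taken from the ab command below, the rest is an assumption:

func main() {
    // Register the spike handler on the route hit by the benchmark.
    http.HandleFunc("/buy/ticket", handleReq)
    // Port matches the one targeted by the ab benchmark below.
    if err := http.ListenAndServe(":3005", nil); err != nil {
        log.Fatal(err)
    }
}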

Performance Test

Using ApacheBench (ab) with 10,000 requests and 100 concurrent connections, the single‑node service achieved about 4,300 requests per second, with an average latency of 23 ms.

ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket

Log output shows a smooth transition from successful sales to “sold out” once the combined local and remote inventory is exhausted.

Conclusion and Takeaways

The design demonstrates three key lessons:

Load balancing and divide‑and‑conquer: Distributing traffic across many nodes reduces per‑node load and improves overall throughput.

Effective use of concurrency and async processing: Go's goroutine model, channel-based serialization of the critical section, and atomic Redis Lua scripts keep the hot path entirely in memory and avoid heavyweight locks and database round trips.

Pre‑allocation of inventory (buffer stock): Local stock reduces database I/O, while a global Redis counter guarantees no overselling and provides fault tolerance when some nodes fail.

By combining Nginx weighted round‑robin, Redis atomic scripts, and Go’s native concurrency, the system can handle millions of simultaneous ticket‑spike requests while ensuring consistency, high availability, and low latency.

Original Source

Signed-in readers can open the original source through BestHub's protected redirect.

Sign in to view source
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
