Building a High‑Concurrency Train Ticket Spike System with Nginx Load Balancing and Redis

This article explains how to design a high‑concurrency ticket‑spike service that can handle millions of requests by using multi‑layer load balancing, Nginx weighted round‑robin, local stock caching, and atomic Redis operations, complete with Go code examples and performance testing.


Background

During holiday travel rushes, huge numbers of people try to buy train tickets at the same time, so the 12306 booking service faces concurrent request peaks on the order of millions of QPS.

System Architecture Overview

The design uses three layers of load balancing—OSPF, LVS, and Nginx—to distribute traffic across a cluster of servers.

Load‑Balancing Methods

OSPF (Open Shortest Path First) – an interior gateway protocol that derives link cost from bandwidth, so traffic can be split across equal-cost routes.

LVS (Linux Virtual Server) – a kernel-level IP virtual server that balances transport-layer traffic and masks the failure of individual backend nodes.

Nginx – an HTTP reverse proxy supporting round-robin, weighted round-robin, and IP-hash balancing.

Weighted Round‑Robin Example

upstream load_rule {
    # weights are relative: :3004 receives four times the traffic of :3001
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        # forward every request to one of the weighted upstream nodes
        proxy_pass http://load_rule;
    }
}
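
To see the weights in action, one can run a trivial Go server on each of the four upstream ports. This sketch is not part of the original configuration; it exists only to make the traffic distribution observable.

package main

import (
    "fmt"
    "net/http"
    "os"
)

// Starts a one-route server that reports which port answered, e.g.
// `go run main.go 3001`. Hitting the proxy repeatedly should show
// 127.0.0.1:3004 answering roughly four times as often as :3001.
func main() {
    port := os.Args[1]
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "served by 127.0.0.1:%s\n", port)
    })
    http.ListenAndServe(":"+port, nil)
}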

Ticket‑Spiking Logic

A purchase passes through three stages: order creation, inventory deduction, and payment. Creating the order first puts heavy write pressure on the database and lets malicious users reserve stock they never pay for. Pre-deduction (reserving inventory before the order is written) cuts database I/O and, combined with an atomic check, prevents overselling.

Local Stock Deduction

Each node keeps a local stock counter: a request increments the node's sales counter and succeeds only if the counter does not exceed the node's local inventory allocation.

// LocalDeductionStock increments this node's sales counter and reports whether
// the sale still fits within the local allocation. It is not concurrency-safe
// on its own; callers serialize access via the channel semaphore shown later.
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume++
    return spike.LocalSalesVolume <= spike.LocalInStock
}
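
The LocalSpike type itself is not shown in the original; a minimal definition consistent with the method above would be:

// LocalSpike tracks per-node inventory. The field names come from the
// method above; the type definition itself is an assumption.
type LocalSpike struct {
    LocalInStock     int64 // tickets allocated to this node
    LocalSalesVolume int64 // tickets this node has already sold
}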

Remote Stock Deduction with Redis

A Redis hash stores total inventory and sold count. A Lua script atomically checks the remaining stock and increments the sold count.

local ticket_key = KEYS[1]
local total_key = ARGV[1]
local sold_key = ARGV[2]
local total = tonumber(redis.call('HGET', ticket_key, total_key))
local sold = tonumber(redis.call('HGET', ticket_key, sold_key))
-- sell only while the sold count is strictly below the total; a >= check
-- would allow one final increment past the inventory (oversell by one)
if total > sold then
    return redis.call('HINCRBY', ticket_key, sold_key, 1)
end
return 0
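
The RemoteDeductionStock method invoked by the handler below is not reproduced in the original. A minimal sketch using the redigo client could look like this; the struct and field names are assumptions for illustration:

import "github.com/gomodule/redigo/redis"

// RemoteSpike names the Redis hash and its fields.
type RemoteSpike struct {
    SpikeOrderHashKey  string // Redis hash key for this ticket
    TotalInventoryKey  string // hash field holding total stock
    QuantityOfOrderKey string // hash field holding the sold count
}

// atomicDeduct precompiles the Lua script shown above; the 1 tells redigo
// that the first argument passed to Do is the single KEYS entry.
var atomicDeduct = redis.NewScript(1, `
local ticket_key = KEYS[1]
local total_key = ARGV[1]
local sold_key = ARGV[2]
local total = tonumber(redis.call('HGET', ticket_key, total_key))
local sold = tonumber(redis.call('HGET', ticket_key, sold_key))
if total > sold then
    return redis.call('HINCRBY', ticket_key, sold_key, 1)
end
return 0`)

// RemoteDeductionStock runs the script atomically on Redis and reports
// whether a ticket was secured (the script returns 0 when sold out).
func (s *RemoteSpike) RemoteDeductionStock(conn redis.Conn) bool {
    count, err := redis.Int64(atomicDeduct.Do(conn,
        s.SpikeOrderHashKey, s.TotalInventoryKey, s.QuantityOfOrderKey))
    return err == nil && count != 0
}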

Service Initialization

The program initializes the local inventory, creates a Redis connection pool, and sets up a size-1 buffered channel used as a binary semaphore. Note that the channel serializes stock deduction only within a single node; it is a lightweight in-process lock, not a distributed one, and the cross-node guarantee comes from the atomic Lua script.
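
The initialization code is only described, not shown. A sketch using the redigo pool follows; the pool sizes, stock numbers, and key names are placeholders:

var (
    localSpike  LocalSpike
    remoteSpike RemoteSpike
    redisPool   *redis.Pool
    done        chan int
)

func init() {
    localSpike = LocalSpike{LocalInStock: 150}
    remoteSpike = RemoteSpike{
        SpikeOrderHashKey:  "ticket_hash_key",
        TotalInventoryKey:  "ticket_total_nums",
        QuantityOfOrderKey: "ticket_sold_nums",
    }
    // pool of connections to the shared Redis instance
    redisPool = &redis.Pool{
        MaxIdle:   10,
        MaxActive: 12000,
        Dial:      func() (redis.Conn, error) { return redis.Dial("tcp", ":6379") },
    }
    // size-1 buffered channel acting as a binary semaphore
    done = make(chan int, 1)
    done <- 1
}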

HTTP Handler

func handleReq(w http.ResponseWriter, r *http.Request) {
    redisConn := redisPool.Get()
    defer redisConn.Close() // return the connection to the pool
    <-done // acquire the binary semaphore: stock checks run one at a time
    if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
        util.RespJson(w, 1, "Ticket secured", nil)
    } else {
        util.RespJson(w, -1, "Sold out", nil)
    }
    done <- 1 // release the semaphore
}
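
The original excerpt stops at the handler. A minimal main function to wire it up could look like this; the route and port are assumptions, and "log" and "net/http" must be imported:

func main() {
    http.HandleFunc("/buy/ticket", handleReq)
    log.Fatal(http.ListenAndServe(":3005", nil))
}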

Performance Test

An ApacheBench run of 10,000 requests at a concurrency of 100 on a single Mac sustains about 4,300 requests per second; since every request is answered from the local counter and Redis, the design keeps heavy database I/O off the hot path.
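
The corresponding ApacheBench invocation, assuming the route and port from the sketch above, would be:

ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket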

Conclusions

Load balancing spreads traffic across nodes, local stock caching rejects most requests without any database I/O, Redis provides fast atomic stock checks, and allocating each node slightly more local stock than its fair share (a small buffer) tolerates individual server failures without stranding tickets, while the Redis check still prevents both oversell and undersell.
