
Designing a High‑Concurrency Ticket‑Seckill System: Architecture, Load Balancing, and Go Implementation

The article analyzes the extreme‑traffic challenges of the 12306 ticket‑seckill service, presents a layered load‑balancing architecture, compares inventory‑deduction strategies, and demonstrates a complete Go‑based prototype with Nginx weighted round‑robin, Redis stock management, and ApacheBench performance testing.


This article examines the massive concurrency problem faced by the 12306 ticket‑seckill system during holidays, where millions of users simultaneously attempt to purchase a limited number of train tickets.

It first outlines a typical high‑concurrency architecture that uses distributed clusters, multiple layers of load balancers (OSPF, LVS, Nginx), and disaster‑recovery mechanisms to achieve high availability.

1. Load‑Balancing Overview – Three common load‑balancing layers are introduced: OSPF (cost‑based routing at the network layer), LVS (transport‑level IP clustering), and Nginx (HTTP reverse proxy), the last configured with weighted round‑robin.

1.2 Nginx Weighted Round‑Robin Demo

# Configuration of weighted upstream in Nginx
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

Four local Go services listen on ports 3001‑3004, each with a different weight, to verify that traffic is distributed according to the configured weights.
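A minimal sketch of one such backend service is shown below. The route, response format, and `portMessage` helper are illustrative, not from the original code; each instance echoes its own port so the weight distribution can be eyeballed from the proxy's responses.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// portMessage builds the response body so each backend identifies
// itself; counting these strings verifies the weighted distribution.
func portMessage(port string) string {
	return fmt.Sprintf("served by 127.0.0.1:%s", port)
}

func main() {
	if len(os.Args) < 2 {
		// No port given: just demonstrate the message format.
		fmt.Println(portMessage("3001"))
		return
	}
	// Start one instance per port, e.g. `go run main.go 3001`.
	port := os.Args[1]
	http.HandleFunc("/buy/ticket", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, portMessage(port))
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:"+port, nil))
}
```

Running four instances on ports 3001‑3004 and issuing repeated requests through Nginx should yield responses in roughly a 1:2:3:4 ratio.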

2. Seckill System Design Choices

The article compares three inventory‑deduction strategies: (2.1) create‑order‑then‑deduct, (2.2) deduct‑after‑payment, and (2.3) pre‑deduction with asynchronous order creation, concluding that pre‑deduction with a buffer stock is the most efficient for high‑traffic scenarios.

3. Stock‑Deduction Techniques

Local in‑memory stock reduction is described, followed by a remote Redis‑based atomic deduction using a Lua script.

-- KEYS[1] is the Redis hash key; ARGV[1]/ARGV[2] are the field names
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- sell only while sold < total; a '>=' here would oversell by one ticket
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
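The intended boundary behavior of the script (sell only while the sold count is below the total) can be modeled in plain Go against a local map standing in for the Redis hash. This is a sketch for checking the logic; in production the comparison runs atomically inside Redis.

```go
package main

import "fmt"

// tryDeductHash mirrors the Lua script's check-then-increment against
// a local map: sell only while sold < total, mimicking HGET + HINCRBY.
func tryDeductHash(hash map[string]int) bool {
	if hash["ticket_total_nums"] > hash["ticket_sold_nums"] {
		hash["ticket_sold_nums"]++ // HINCRBY ticket_sold_nums 1
		return true
	}
	return false
}

func main() {
	h := map[string]int{"ticket_total_nums": 2, "ticket_sold_nums": 0}
	// Two sales succeed, the third is rejected once sold == total.
	fmt.Println(tryDeductHash(h), tryDeductHash(h), tryDeductHash(h))
}
```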

Go code for local stock handling:

package localSpike

// LocalDeductionStock reports whether the sale can proceed.
// Note: this method is not goroutine-safe on its own; the HTTP
// handler serializes calls to it through a channel.
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume = spike.LocalSalesVolume + 1
    // '<=' so the last unit in stock can still be sold; a strict '<'
    // would leave one ticket permanently unsellable.
    return spike.LocalSalesVolume <= spike.LocalInStock
}
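An alternative sketch using sync/atomic would make the local check safe without any external serialization. The type and field names here are illustrative, not from the original package:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// AtomicSpike is an illustrative lock-free variant of LocalSpike.
type AtomicSpike struct {
	InStock     int64
	SalesVolume int64
}

// Deduct claims one unit atomically; safe from many goroutines.
func (s *AtomicSpike) Deduct() bool {
	return atomic.AddInt64(&s.SalesVolume, 1) <= s.InStock
}

func main() {
	s := &AtomicSpike{InStock: 100}
	var sold int64
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ { // 1000 buyers race for 100 tickets
		wg.Add(1)
		go func() {
			defer wg.Done()
			if s.Deduct() {
				atomic.AddInt64(&sold, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Println(sold) // exactly 100: no overselling, no lost sales
}
```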

Go code for remote Redis deduction:

package remoteSpike

// RemoteDeductionStock runs the Lua script above atomically in Redis
// and reports whether a ticket was claimed (non-zero HINCRBY result).
func (keys *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    lua := redis.NewScript(1, LuaScript) // 1 key, script body in LuaScript
    result, err := redis.Int(lua.Do(conn, keys.SpikeOrderHashKey, keys.TotalInventoryKey, keys.QuantityOfOrderKey))
    if err != nil {
        return false // treat Redis errors as a failed deduction
    }
    return result != 0
}

Initialization of Redis stock (executed once before the service starts):

HMSET ticket_hash_key "ticket_total_nums" 10000 "ticket_sold_nums" 0

The HTTP handler combines local and remote deductions, returns a JSON success or sold‑out message, and logs each request to ./stat.log.

func handleReq(w http.ResponseWriter, r *http.Request) {
    redisConn := redisPool.Get()
    defer redisConn.Close() // return the connection to the pool
    <-done // acquire the channel "lock" to serialize stock checks
    if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
        util.RespJson(w, 1, "ticket grabbed successfully", nil)
    } else {
        util.RespJson(w, -1, "sold out", nil)
    }
    done <- 1 // release the "lock"
    writeLog(...)
}
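The `done` channel acts as a one-slot mutex: each request blocks on `<-done` until the previous one releases with `done <- 1`, so the local and remote checks run as a unit. A standalone sketch of the pattern (variable names illustrative; the token must be seeded once at startup):

```go
package main

import (
	"fmt"
	"sync"
)

// done is a capacity-1 channel used as a mutex.
var done = make(chan int, 1)

var counter int // shared state guarded by the channel

func criticalIncrement() {
	<-done // acquire: blocks until the token is available
	counter++
	done <- 1 // release: put the token back
}

func main() {
	done <- 1 // seed the single token once, before serving
	var wg sync.WaitGroup
	for i := 0; i < 500; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); criticalIncrement() }()
	}
	wg.Wait()
	fmt.Println(counter) // 500: no lost updates
}
```

Forgetting to seed the token (or seeding it twice) deadlocks or breaks mutual exclusion, which is why this initialization belongs in a single well-defined startup path.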

4. Performance Testing

Performance testing with ApacheBench (ab) shows the single‑machine prototype handling over 4,000 requests per second with stable latency, and the log confirms correct stock accounting.
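An invocation would look something like the following command fragment; the port and the request/concurrency counts are illustrative, assuming a single local instance serving the seckill endpoint.

```shell
# 10,000 total requests at a concurrency of 100 against the local endpoint
ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket
```

ab's summary (requests per second, latency percentiles, failed requests) can then be cross-checked against the sold counts in ./stat.log and the Redis hash.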

5. Summary

The prototype demonstrates how to build a high‑throughput ticket‑seckill service by combining Nginx weighted load balancing, in‑memory stock caching, Redis atomic operations, and Go’s native concurrency, while avoiding database bottlenecks and providing fault tolerance through buffer stock.

Tags: distributed systems, load balancing, Redis, Go, high concurrency, Nginx, ticket seckill
Written by

Java Architect Essentials

Committed to sharing quality articles and tutorials to help Java programmers progress from junior to mid-level to senior architect. We curate high-quality learning resources, interview questions, videos, and projects from across the internet to help you systematically improve your Java architecture skills. Follow and reply '1024' to get Java programming resources. Learn together, grow together.
