How to Build a Million‑User Ticket‑Snatching System with Nginx, Redis, and Go

This article explains how to design a high‑concurrency ticket‑snatching service that can absorb millions of requests. The design combines multi‑layer load balancing, weighted Nginx round‑robin, per‑node in‑memory stock backed by a Redis global inventory, and Go's native concurrency, with code samples and performance results.


Extreme concurrency and ticket‑snatching

During holidays, millions of users compete for train tickets on China's 12306 service, which reportedly sustains millions of requests per second at peak — a textbook case for a robust high‑concurrency architecture.

Large‑scale architecture

High‑concurrency systems typically deploy distributed clusters with multiple layers of load balancing and disaster‑recovery mechanisms.

OSPF – interior gateway protocol that builds a link‑state database from interface costs and can load‑balance traffic across equal‑cost paths (ECMP).

LVS – Linux Virtual Server provides IP‑level load balancing and automatic failover across a server pool.

Nginx – high‑performance HTTP reverse proxy that supports round‑robin, weighted round‑robin, and IP‑hash algorithms.

Weighted round‑robin in Nginx

upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}
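The config above uses the weighted round‑robin algorithm. For session affinity, the same upstream could instead use the ip_hash algorithm mentioned earlier, which pins each client IP to a fixed back‑end (this variant is illustrative, not part of the original setup):

```nginx
upstream load_rule {
    ip_hash;                   # route each client IP to the same backend
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
```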

Stock‑deduction strategies

The article compares three common approaches:

Order‑then‑deduct – create an order and immediately reduce inventory; safe from overselling but incurs heavy DB I/O under extreme load.

Pay‑then‑deduct – wait for payment before reducing stock; reduces DB pressure but can cause overselling when many unpaid orders accumulate.

Pre‑deduct (reserve stock) – reserve inventory first, generate the order asynchronously; avoids DB bottlenecks while preventing both oversell and undersell.

Local stock + Redis central stock

Each server keeps a small amount of tickets in memory (local stock). When a request arrives, the server first checks and updates its local counter. If successful, it then atomically decrements the global count stored in Redis using a Lua script, ensuring consistency across the cluster.

const LuaScript = `
    local ticket_key = KEYS[1]        -- hash key holding the inventory
    local ticket_total_key = ARGV[1]  -- hash field: total tickets
    local ticket_sold_key = ARGV[2]   -- hash field: tickets sold so far
    local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
    local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
    -- Only sell while sold < total; a '>=' check here would oversell by one.
    if (ticket_total_nums > ticket_sold_nums) then
        return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
    end
    return 0
`

Go implementation

The Go program ties the pieces together: initialization, local stock deduction, remote Redis deduction, and HTTP handling. The excerpt below shows the HTTP entry point; the deduction logic sits behind handleReq in the linked repository.

package main

import (
    "fmt"
    "log"
    "net/http"
)

// handleReq is the ticket-purchase endpoint. The full version first
// deducts local in-memory stock and then the Redis global stock; that
// logic is elided here.
func handleReq(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "success")
}

func main() {
    http.HandleFunc("/buy/ticket", handleReq)
    log.Fatal(http.ListenAndServe(":3005", nil))
}

Performance test

Using ApacheBench (ab) with 10,000 total requests at 100 concurrent connections, the service sustained roughly 4,300 requests per second with an average latency of about 23 ms. Log analysis shows traffic spread across the back‑ends in proportion to their configured weights, confirming both the load balancing and the pre‑deduction design.
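The numbers above correspond to an ApacheBench run along these lines (the host assumes the Nginx front end configured earlier; adjust it to your deployment):

```shell
# 10,000 requests total, 100 in flight at a time
ab -n 10000 -c 100 http://load_balance.com/buy/ticket
```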

Conclusion

Combining multi‑layer load balancing, weighted Nginx routing, in‑memory pre‑deduction, and Redis‑backed global inventory enables a ticket‑snatching system to sustain extreme traffic without database bottlenecks. The architecture also tolerates node failures by reserving buffer stock on each server, ensuring both no‑oversell and no‑undersell guarantees.

GitHub repository with the full source code: https://github.com/GuoZhaoran/spikeSystem

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Load Balancing · Redis · Go · High Concurrency · Nginx · Ticketing System
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
