
Designing a High‑Concurrency Ticket‑Booking System: Load Balancing, Nginx Weighted Round‑Robin, and Go Implementation

This article presents a complete case study of building a high‑concurrency ticket‑booking service, covering system architecture, three‑layer load balancing (OSPF, LVS, Nginx), Nginx weighted round‑robin configuration, Go‑based request handling, Redis‑backed stock deduction, performance testing with ApacheBench, and practical lessons for preventing overselling and ensuring high availability.

IT Architects Alliance

The author examines the extreme concurrency challenges of the 12306 ticket‑booking platform and demonstrates a simulated example where one million users compete for ten thousand tickets, focusing on maintaining stable service through distributed architecture.

Large‑Scale High‑Concurrency Architecture – The system uses a multi‑layered load‑balancing design with OSPF, LVS, and Nginx to distribute traffic across a cluster of servers, ensuring high availability via dual data‑centers and fault‑tolerant nodes.

Load‑Balancing Overview – Three types of load balancing are introduced:

OSPF (Open Shortest Path First) – an interior gateway protocol that calculates shortest paths and can perform load balancing across equal‑cost links.

LVS (Linux Virtual Server) – a cluster technology that provides IP‑level load balancing and hides server failures.

Nginx – a high‑performance HTTP reverse proxy that supports round‑robin, weighted round‑robin, and IP‑hash methods.

Nginx Weighted Round‑Robin Demo

# Configure the upstream load-balancing pool
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen       80;
    server_name  load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

The configuration assigns different weights to four local ports (3001‑3004) and proxies incoming requests accordingly.
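To make the weighting concrete, the selection logic behind such a configuration can be sketched in Go. This is a minimal implementation of the smooth weighted round-robin algorithm that Nginx uses for weighted upstreams, not the article's code; the `server` struct and `next` function are names chosen for this illustration.

```go
package main

import "fmt"

// server models one upstream entry with its configured weight.
type server struct {
	addr    string
	weight  int // configured weight, as in the upstream block
	current int // running counter used by the smooth algorithm
}

// next implements smooth weighted round-robin: every server's counter
// grows by its weight, the largest counter wins the request, and the
// winner's counter is pushed back down by the total weight.
func next(servers []*server) *server {
	total := 0
	var best *server
	for _, s := range servers {
		s.current += s.weight
		total += s.weight
		if best == nil || s.current > best.current {
			best = s
		}
	}
	best.current -= total
	return best
}

func main() {
	pool := []*server{
		{addr: "127.0.0.1:3001", weight: 1},
		{addr: "127.0.0.1:3002", weight: 2},
		{addr: "127.0.0.1:3003", weight: 3},
		{addr: "127.0.0.1:3004", weight: 4},
	}
	counts := map[string]int{}
	for i := 0; i < 10; i++ { // 10 = total weight, i.e. one full cycle
		counts[next(pool).addr]++
	}
	// Each addr is served in proportion to its weight: 1, 2, 3, and 4
	// of every 10 requests.
	fmt.Println(counts)
}
```

Over any window of ten requests (the sum of the weights), each backend receives exactly its weight's share, which is the distribution the log analysis later confirms.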

Go Service Implementation

package main
import (
    "net/http"
    "os"
    "strings"
)
func main() {
    http.HandleFunc("/buy/ticket", handleReq)
    http.ListenAndServe(":3001", nil)
}
// handleReq processes the request and writes the response result to a log.
func handleReq(w http.ResponseWriter, r *http.Request) {
    // ... implementation omitted for brevity ...
}

// writeLog appends a message to the log file at logPath.
func writeLog(msg string, logPath string) {
    fd, err := os.OpenFile(logPath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return
    }
    defer fd.Close()
    content := strings.Join([]string{msg, "\r\n"}, "")
    fd.Write([]byte(content))
}

The Go program starts an HTTP server, logs each request, and uses a channel to serialize access, avoiding race conditions.
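The handler body is omitted above, so the channel-based serialization is only described, not shown. The sketch below is one hypothetical way to fill it in: a one-slot token channel guards the shared counter, so at most one request mutates the stock at a time. The names `handleBuy`, `done`, and `stock` (and the quota of 10000) are assumptions for this demo, not the article's code.

```go
package main

import (
	"fmt"
	"net/http"
)

var (
	// done is a one-slot token channel: a request must take the token
	// before touching the counter and give it back afterwards, so at
	// most one goroutine mutates the stock at a time.
	done  = make(chan struct{}, 1)
	stock = 10000 // this node's local ticket quota (assumed figure)
)

func init() { done <- struct{}{} } // seed the token

// handleBuy sketches the omitted handler body: it serializes the
// deduction through the channel instead of a mutex.
func handleBuy(w http.ResponseWriter, r *http.Request) {
	<-done // acquire the token
	ok := stock > 0
	if ok {
		stock--
	}
	done <- struct{}{} // release the token

	if ok {
		fmt.Fprint(w, "success")
	} else {
		fmt.Fprint(w, "sold out")
	}
}

func main() {
	http.HandleFunc("/buy/ticket", handleBuy)
	http.ListenAndServe(":3001", nil)
}
```

A buffered channel of capacity one behaves like a mutex but composes naturally with `select` and timeouts, which is a common reason to prefer it in Go request paths.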

Local and Remote Stock Deduction

// LocalDeductionStock consumes one unit of this node's local quota.
// Callers must serialize access (e.g. via a channel or mutex). The
// comparison uses <= so the last ticket in stock can still be sold.
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume = spike.LocalSalesVolume + 1
    return spike.LocalSalesVolume <= spike.LocalInStock
}

const LuaScript = `
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
`
func (RemoteSpikeKeys *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    lua := redis.NewScript(1, LuaScript)
    result, err := redis.Int(lua.Do(conn, RemoteSpikeKeys.SpikeOrderHashKey, RemoteSpikeKeys.TotalInventoryKey, RemoteSpikeKeys.QuantityOfOrderKey))
    if err != nil {
        return false
    }
    return result != 0
}

Local deduction updates an in‑memory counter so each node can reject excess traffic cheaply, while remote deduction uses a Redis Lua script to compare and increment the shared sold count atomically; because Redis executes a Lua script as a single uninterrupted operation, no two requests can both pass the stock check for the same ticket.
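How the two layers compose into a purchase path is not shown in the article; the sketch below is one plausible arrangement. The `buyTicket` glue function and the `fakeRemote` stand-in (which simulates the Lua script's compare-and-increment in memory, so the flow runs without a live Redis) are assumptions for this demo.

```go
package main

import "fmt"

// LocalSpike mirrors the article's in-memory counter for one node.
type LocalSpike struct {
	LocalInStock     int64
	LocalSalesVolume int64
}

// LocalDeductionStock consumes one unit of this node's quota.
func (s *LocalSpike) LocalDeductionStock() bool {
	s.LocalSalesVolume++
	return s.LocalSalesVolume <= s.LocalInStock
}

// remoteDeducter abstracts the Redis-backed check so the flow can be
// demonstrated without a live Redis connection.
type remoteDeducter interface {
	RemoteDeductionStock() bool
}

// buyTicket composes the two layers: the cheap local check filters
// traffic, and only survivors pay for the atomic remote decision.
func buyTicket(local *LocalSpike, remote remoteDeducter) bool {
	if !local.LocalDeductionStock() {
		return false // node quota exhausted: fail fast, no Redis round-trip
	}
	return remote.RemoteDeductionStock()
}

// fakeRemote simulates the Lua script's compare-and-increment in memory.
type fakeRemote struct{ total, sold int64 }

func (f *fakeRemote) RemoteDeductionStock() bool {
	if f.sold < f.total {
		f.sold++
		return true
	}
	return false
}

func main() {
	local := &LocalSpike{LocalInStock: 3}
	remote := &fakeRemote{total: 2}
	for i := 0; i < 4; i++ {
		fmt.Println(buyTicket(local, remote))
	}
	// Prints true, true, false, false: the local quota admits three
	// requests, but the shared inventory only covers two.
}
```

The shared inventory is the final arbiter, so even if local quotas are mis-sized across nodes, the remote check prevents overselling.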

Performance Testing

ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket

The ApacheBench test shows the single‑machine service handling over 4,000 requests per second with low latency, and the log confirms that request distribution matches the configured Nginx weights.

Conclusion

The case study demonstrates that by combining multi‑layer load balancing, weighted Nginx routing, in‑memory stock handling, and Redis for atomic remote updates, a ticket‑seckill system can achieve high throughput, avoid overselling, tolerate partial node failures, and maintain consistent performance under extreme concurrency.

Tags: load balancing, Redis, Go, high concurrency, Nginx, ticketing system
Written by IT Architects Alliance

Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture transformation with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
