Optimization Strategies for High‑Concurrency Ticketing Systems
The article analyzes the challenges of high‑traffic ticketing platforms, compares business models, identifies concurrency bottlenecks, and presents comprehensive front‑end and back‑end optimization techniques—including load balancing, caching, data partitioning, and queue‑based flow control—to achieve horizontal scalability and reliable performance.
1. Business Complexity Comparison

Different business models are compared: QQ reads and writes only its own users' data; flash-sale systems accept only the first N requests; Olympic ticket sales used registration plus a lottery rather than first-come-first-served allocation; C2C e-commerce sellers manage only their own inventory. The conclusion is that real-time inventory management is the nightmare of B2C scenarios, and this is exactly the situation the 12306 railway ticketing system faces.
2. Bottlenecks

The typical inventory workflow is: reserve stock, take payment, then deduct stock, and each step requires locking the data. Ensuring data consistency under massive concurrency is extremely difficult. For 12306, each ticket release triggers tens of millions of visits within minutes, with peak traffic reaching up to 1 billion page views, making consistency the critical pain point.
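The reserve-and-deduct step above can be sketched as a single atomic compare-and-deduct statement, which avoids holding an explicit row lock across the whole reserve/pay window. This is a minimal illustration, not the article's implementation: the use of SQLite, the table name, and the schema are all assumptions.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (train_id TEXT PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO tickets VALUES ('G1001', 2)")
conn.commit()

def reserve(train_id: str) -> bool:
    """Deduct one seat only if stock remains. The WHERE clause makes the
    check and the deduction a single atomic statement, so two concurrent
    buyers can never both take the last seat."""
    cur = conn.execute(
        "UPDATE tickets SET stock = stock - 1 "
        "WHERE train_id = ? AND stock > 0",
        (train_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # one row changed => reservation succeeded

print(reserve("G1001"))  # True
print(reserve("G1001"))  # True
print(reserve("G1001"))  # False: sold out
```

The same compare-and-deduct pattern applies in any relational store; the point is that the consistency check and the write happen in one statement.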
3. Front-End Optimizations

Recommendations include:
(1) Load balancing via DNS and CDN;
(2) Reducing the number of page resources by merging JavaScript, CSS, and icon files;
(3) Compressing assets and moving images to a separate service to lower bandwidth usage;
(4) Static generation of pages, possibly serving the static files from memory (e.g., /dev/shm);
(5) Reducing query results to a simple "available / not available" flag;
(6) Front-end caching of dynamic pages.
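Points (4) and (5) combine naturally: periodically render the reduced available/not-available flags into a small static file that the web tier serves without touching the database. The sketch below is an assumption-laden illustration; the file name, JSON format, and fallback directory are not from the article.

```python
import json
import os
import tempfile

# Use the in-memory filesystem mentioned in the article when present,
# otherwise fall back to the ordinary temp directory (illustrative choice).
SHM_DIR = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

def publish_flags(availability: dict) -> str:
    """Write the reduced 'available?' flags as a static JSON file.
    Writing to a temp file and renaming makes the swap atomic, so
    readers never observe a half-written file."""
    path = os.path.join(SHM_DIR, "availability.json")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(availability, f)
    os.replace(tmp, path)  # atomic on POSIX filesystems
    return path

path = publish_flags({"G1001": True, "G1002": False})
print(path)
```

The web servers then serve this file directly (or memory-map it), so a query costs a file read instead of a database round trip.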
4. Back-End Optimizations

Suggested measures are:
(1) Data redundancy across multiple tables, at the cost of consistency;
(2) Data replication (mirroring), which still faces consistency issues;
(3) Data partitioning by database, table, or field;
(4) Load balancing by splitting static from dynamic traffic;
(5) Asynchrony, throttling, and batch processing to smooth traffic spikes.
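Point (3), partitioning by table, can be sketched as stable-hash routing: the same key always maps to the same shard, so reads and writes for one user never span tables. The shard count and table-naming scheme below are illustrative assumptions.

```python
import zlib

N_SHARDS = 4  # illustrative; real systems pick this for growth headroom

def shard_for(user_id: str) -> str:
    """Route a user to a fixed table. crc32 is used because it is
    deterministic across processes and restarts, unlike Python's
    built-in hash(), which is salted per interpreter run."""
    return f"orders_{zlib.crc32(user_id.encode()) % N_SHARDS}"

print(shard_for("alice"))
print(shard_for("alice") == shard_for("alice"))  # True: routing is stable
```

The trade-off the article hints at: cross-shard queries (and resharding when N changes) become harder, which is why partition keys are chosen along the dominant access path.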
5. Overall Summary

The system must support horizontal scaling: adding more machines should be the primary way to increase capacity.
Additional Proposals: Cloudwind's Queue Theory

The core idea is to treat ticket acquisition like a restaurant queue: issue signed tokens ("numbers") to users, store them in a large circular buffer, hash each token to its buffer index, and allow entry only when a token reaches the head of the queue, issuing a signed session to each successful user.
Key Considerations for the Queue System

(1) Tokens are signed, which prevents forgery and abuse;
(2) Excessively frequent queries can invalidate a user's token, limiting load;
(3) Sessions have limited lifetimes, so an unused session forces the user to re-queue.
Final Thoughts on the Queue Approach

Once a session is obtained, the ticket-purchase flow is no longer the bottleneck; invalid or timed-out sessions can be discarded immediately, and the token-based queue precisely controls the traffic entering the system while remaining a high-performance in-memory operation.
Cao Zheng's Optimized Blog Solution

This solution targets a massive-traffic scenario (10 billion PV per day for train searches, tens of millions for login, millions for ordering). The main optimization directions are:
(1) KV-style storage using Redis, so queries become simple key lookups;
(2) Back-end caching of results for train data, remaining tickets, and availability flags;
(3) Front-end caching with anti-scraping measures;
(4) I/O optimization for order processing.
Core Takeaway

Caching (statically generating query results) is the centerpiece of the optimization; it fits when query frequency far exceeds update frequency and all users request the same data at the same time.
References

For further reading on caching strategies, the article cites Yang Jian's blog post "Website Acceleration: Cache Is King".
Qunar Tech Salon
Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.