Case Study: Scaling a Ticketing System with TiDB at Tongcheng
Facing rapid traffic growth and the limits of a single MySQL instance, Tongcheng moved its ticketing order database to a sharded MySQL cluster paired with TiDB, gaining transparent sharding, real‑time synchronization, high‑availability monitoring, and the capacity to handle billions of rows at peak QPS above 10,000.
When Tongcheng first learned about TiDB from its chief architect, Wang Xiaobo, the company was transitioning its development stack and databases to open‑source solutions. Its online ticketing business generated data volumes and query loads that MySQL could not satisfy, prompting the creation of an in‑house sharding middleware (DBrouter) that still struggled with post‑sharding aggregation, real‑time statistics, and full‑data monitoring.
In late 2016, the ticketing order database faced increasing pressure due to a surge in traffic before the National Day holiday. New requirements such as minute‑level order monitoring added many complex queries, while the total size of the order database approached several terabytes. The decision was made to shard the order database to reduce single‑node load.
After evaluating the existing sharding solution, Tongcheng found that a small number of complex, full‑table‑scan queries consumed over 80% of I/O, degrading overall performance. The chief architect suggested trying TiDB. With cooperation between the middleware team and the DBAs, TiDB was used as a unified data store for complex queries, while the sharding cluster continued to handle simple queries. Because TiDB is highly compatible with the MySQL protocol, Tongcheng extended PingCAP’s Syncer tool to customize database and table names, adding TPS and latency monitoring plus WeChat alerts for anomalies, to ensure real‑time synchronization from MySQL to TiDB.
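Syncer expresses the database‑ and table‑name mapping mentioned above through routing rules, which merge many sharded upstream tables into one downstream table in TiDB. A minimal sketch of such a rule follows; the schema and table names are illustrative, not Tongcheng's actual naming:

```toml
# Hypothetical Syncer route rule: merge sharded upstream tables
# (order_db_0.order_tab_0, order_db_0.order_tab_1, ...) into a
# single downstream table order_db.order_tab in TiDB.
[[route-rules]]
pattern-schema = "order_db_*"
pattern-table  = "order_tab_*"
target-schema  = "order_db"
target-table   = "order_tab"
```

Wildcard patterns match all shard suffixes, so complex queries against TiDB see one logical table instead of thousands of physical shards.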
Stress tests showed that the combined sharding + TiDB solution met both functional and performance requirements. The architecture was quickly adjusted, consolidating thousands of MySQL shards into a single TiDB cluster, which successfully handled the 2016 National Day traffic peak—twice the normal load—without incidents.
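The split at the heart of this architecture routes simple point lookups to the sharded MySQL cluster and complex analytical queries to TiDB. A rough sketch of that routing decision is below; the classification heuristic, shard count, and backend names are illustrative assumptions, not the actual DBrouter logic:

```python
# Sketch of a query router: point lookups keyed by the shard key go
# to a MySQL shard; aggregations and other complex queries go to TiDB.
# Heuristic, shard count, and backend names are illustrative only.
import re

SHARD_COUNT = 1024  # hypothetical number of MySQL shards


def classify(sql: str) -> str:
    """Return 'shard' for simple point queries keyed by order_id,
    'tidb' for complex queries (joins, aggregates, no shard key)."""
    s = sql.strip().lower()
    has_shard_key = re.search(r"\border_id\s*=\s*\d+", s) is not None
    is_complex = any(kw in s for kw in (" join ", "group by", "count(", "sum("))
    if s.startswith("select") and has_shard_key and not is_complex:
        return "shard"
    return "tidb"


def route(sql: str) -> str:
    """Pick a backend name for the query (names are placeholders)."""
    if classify(sql) == "shard":
        m = re.search(r"\border_id\s*=\s*(\d+)", sql.lower())
        shard = int(m.group(1)) % SHARD_COUNT
        return f"mysql-shard-{shard}"
    return "tidb-cluster"
```

A point lookup like `SELECT * FROM orders WHERE order_id = 42` lands on a MySQL shard, while a `GROUP BY` aggregation is sent to the TiDB cluster, keeping heavy scans off the sharded nodes.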
The real‑time sync and query system architecture is illustrated below:
Figure 1: System Architecture Diagram
Following the successful deployment, Tongcheng deepened its TiDB usage, deploying various monitoring solutions recommended by PingCAP.
Figure 2: Grafana Monitoring Dashboard – TiDB
Figure 3: Grafana Monitoring Dashboard – TiKV
To improve observability, TiDB’s alarm system was integrated with the company’s monitoring and self‑healing platforms, enabling automatic detection and remediation of anomalies.
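A simplified sketch of the kind of check such an integration might run is shown below, comparing replication lag, error rate, and tail latency against thresholds before raising an alert. The metric names and threshold values are assumptions for illustration; the company's actual platform interfaces are not public:

```python
# Sketch of an anomaly check a monitoring/self-healing platform might
# run against TiDB metrics. Metric names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class Metrics:
    replication_lag_s: float   # MySQL -> TiDB sync lag, in seconds
    error_rate: float          # fraction of failed queries
    p99_latency_ms: float      # 99th-percentile query latency

# Hypothetical alert thresholds.
LAG_LIMIT_S = 30.0
ERROR_LIMIT = 0.01
P99_LIMIT_MS = 500.0


def detect_anomalies(m: Metrics) -> list[str]:
    """Return human-readable descriptions of threshold violations."""
    alerts = []
    if m.replication_lag_s > LAG_LIMIT_S:
        alerts.append(f"replication lag {m.replication_lag_s:.0f}s > {LAG_LIMIT_S:.0f}s")
    if m.error_rate > ERROR_LIMIT:
        alerts.append(f"error rate {m.error_rate:.2%} > {ERROR_LIMIT:.2%}")
    if m.p99_latency_ms > P99_LIMIT_MS:
        alerts.append(f"p99 latency {m.p99_latency_ms:.0f}ms > {P99_LIMIT_MS:.0f}ms")
    return alerts
```

In a real deployment the returned alerts would be pushed to the WeChat alerting channel and, where a remediation playbook exists, trigger the self‑healing platform automatically.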
After the initial rollout, Tongcheng quickly migrated its flight‑ticket business to TiDB. At the time of writing, the company operates several TiDB clusters across nearly a hundred servers, storing tens of terabytes of data; the largest cluster holds over ten terabytes and more than ten billion rows, serving billions of daily accesses with average QPS of 5,000 and peak QPS exceeding 10,000.
Because TiDB is MySQL‑compatible and supports standard SQL, it has become a primary database solution for new projects, with ongoing collaboration and feedback loops with PingCAP engineers.
Looking ahead, Tongcheng plans to adopt TiDB DBaaS based on Kubernetes (TiDB‑Operator) to automate deployment, scaling, and failover, reducing operational overhead. They also intend to explore TiSpark for real‑time analytics and data warehousing.
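With TiDB Operator, a cluster is declared as a Kubernetes custom resource and the operator handles deployment, scaling, and failover. A minimal sketch of such a declaration follows; the cluster name, version, replica counts, and storage sizes are illustrative placeholders:

```yaml
# Minimal TidbCluster custom resource for TiDB Operator.
# Name, version, replica counts, and storage sizes are illustrative.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: ticketing-orders
spec:
  version: v6.5.0
  pd:
    replicas: 3
    requests:
      storage: 10Gi
  tikv:
    replicas: 3
    requests:
      storage: 100Gi
  tidb:
    replicas: 2
    service:
      type: ClusterIP
```

Scaling then becomes a declarative change: editing `replicas` and reapplying the manifest is enough for the operator to add or drain nodes.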
Figure 4: TiDB DBaaS Solution
Source: https://zhuanlan.zhihu.com/p/35602651
Architecture Digest