Trip.com Train Ticket Globalization Architecture Evolution and Practices
This article is a case study of the global expansion of Trip.com's train ticket service. It covers the business background; the challenges of multi-region deployment, performance, data compliance, and scalability; and the step-by-step architectural evolution across region selection and the network, data, infrastructure, and business layers toward a resilient, low-latency, and compliant worldwide service.
Author Bio
py.an, Ctrip Backend R&D Manager, focuses on performance optimization and technical architecture.
venson, Ctrip Senior Backend R&D Manager, focuses on performance optimization and technical architecture.
1. Introduction
Against the backdrop of a global strategy, Trip.com, as an international OTA platform, is accelerating its globalization deployment. The train ticket business is investing resources and technology to expand overseas, deploying applications and data in Singapore and Frankfurt to improve user experience and reduce data‑compliance risks.
2. Business Background
The global railway business of Trip.com Train Ticket is currently concentrated in the UK, Asia, and Europe, with Europe being a major focus due to its advanced economy and transportation network. Post‑COVID, travel demand is rebounding, and multi‑language, multi‑currency support has enabled a growing global footprint.
3. Challenges
Global deployment must satisfy application availability, user performance, data security, legal compliance, and data isolation. Specific challenges include:
3.1 Global Deployment
Before transformation, the service operated in a single IDC with dual‑active sites (IDC A+B). The new architecture moves to a region‑level, multi‑center model with separate logical partitions, user partitions, near‑edge access, and strict cross‑border data policies.
| Deployment Model | Disaster-Recovery Level | Same Logical Partition | User Partition | Near-Edge Access | Data Multi-Active | Public Access |
| --- | --- | --- | --- | --- | --- | --- |
| Pre-migration (dual-active) | Cross-IDC | Yes | No | No | Fully supported, mature | |
| Global multi-center | Region level | No | Yes (unit-based) | Yes (must follow cross-border policy) | Supports multi-IDC scenarios | |
Multi-IDC deployment introduces data-sharding, unitization, sync-conflict, and idempotency issues, requiring major adjustments to both the application and the PaaS infrastructure.
3.2 Performance Issues
Cross‑ocean network latency and long transmission paths degrade user experience; optimizing routing and reducing latency are essential.
3.3 Data Compliance and Regulation
Strict adherence to regional data‑cross‑border laws and security regulations is required.
3.4 Data Offshore Issues
Data consistency across multiple IDC read/write scenarios.
Compliance with cross‑border policies that often prohibit multi‑active deployments.
3.5 Global Scalability
Dynamic data‑storage strategies must adapt to evolving compliance policies while allowing rapid business expansion.
4. Architecture Evolution Practices
4.1 Region (Availability Zone) Selection
Factors considered include user demand, legal/privacy requirements, infrastructure, network quality, cross‑border risk, and cost‑benefit analysis. Trip.com selected Singapore (SIN) and Frankfurt (FRA) as data‑center regions for the train ticket service.
4.2 Network Access Layer
Three routing scenarios are defined:
External Network: multi-path, near-edge routing to minimize latency.
Internal Network: prefer intra-region resource access so each request closes the loop inside one region.
Cross-Region Access: when a resource is unavailable within a region, use optimized links (e.g., European users enter at FRA and then reach SIN via dedicated lines) to avoid long-haul cross-ocean traffic.
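The three routing rules above can be sketched as a small decision function. This is an illustrative model only: the region names, the resource-to-region map, and the location sets are assumptions, not Trip.com's actual topology.

```python
# Illustrative sketch of the three routing scenarios: external near-edge entry,
# intra-region closure, and cross-region access via a dedicated link.
# The resource map and location sets below are hypothetical.

RESOURCE_HOME = {
    "uk_rail_inventory": "FRA",
    "asia_rail_inventory": "SIN",
}

def nearest_region(user_location: str) -> str:
    """External network: route the user to the nearest entry region."""
    return "FRA" if user_location in {"UK", "DE", "FR"} else "SIN"

def route(user_location: str, resource: str) -> list:
    """Return the hop sequence for a request, preferring intra-region closure."""
    entry = nearest_region(user_location)
    home = RESOURCE_HOME[resource]
    if home == entry:
        return [entry]        # internal network: closed within one region
    return [entry, home]      # cross-region: entry region, then dedicated line

print(route("UK", "uk_rail_inventory"))    # closed within FRA
print(route("UK", "asia_rail_inventory"))  # FRA, then SIN over the dedicated line
```

The key property is that a request never crosses the ocean on the public internet: it either terminates in the entry region or continues over an optimized inter-region link.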
4.3 Data Layer
1) Data Offshore Compliance Refactoring – Classify and tag sensitive data, encrypt or anonymize it, and perform data‑splitting to satisfy local regulations.
2) Multi‑IDC DB Deployment – Deploy databases across multiple IDC sites, ensuring synchronization respects regional legal constraints.
3) Sync Latency Monitoring – Monitor DRC sync latency (e.g., SIN↔FRA: 160 ms+).
4) DB Multi‑IDC Scalability – Introduce a RegionCode field to route queries to the appropriate region, enabling unit‑based processing and dynamic compliance adjustments.
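A minimal sketch of RegionCode-based query routing, assuming a hypothetical table name and region-to-IDC mapping (the real schema and mapping are not described in the article):

```python
# Illustrative sketch: every query carries a RegionCode so it can be executed
# in the owning IDC, and re-routed later if compliance policy changes the
# mapping. Table and region names are assumptions.

REGION_TO_IDC = {"EU": "FRA", "APAC": "SIN"}

def idc_for(region_code: str) -> str:
    """Resolve which IDC owns a given RegionCode (dynamically adjustable)."""
    return REGION_TO_IDC[region_code]

def build_query(table: str, region_code: str) -> tuple:
    """Build a region-scoped query and the IDC it should be sent to."""
    idc = idc_for(region_code)
    sql = f"SELECT * FROM {table} WHERE region_code = %s"
    return idc, sql, (region_code,)

idc, sql, params = build_query("train_order", "EU")
print(idc, sql, params)  # routed to FRA
```

Because the RegionCode-to-IDC mapping is a single lookup table, tightening a cross-border policy only requires changing the mapping, not the queries.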
4.4 Infrastructure Component Layer
1) PaaS Multi‑IDC Integration
Distributed configuration center with region‑specific config files.
Distributed scheduling center that shards jobs by RegionCode across IDC.
Redis clusters isolated per IDC; no bidirectional sync, ensuring intra‑unit closure.
2) Message Center Multi‑IDC Refactoring – Logical grouping of MQ clusters per region, cross‑region synchronization via a base subject, and idempotent consumption based on RegionCode.
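Idempotent, region-aware consumption can be sketched as follows. This is an assumption-laden toy: the message shape, the in-memory dedup set (which would be Redis or a DB in practice), and the region codes are all illustrative.

```python
# Sketch of idempotent MQ consumption: messages replicated across regions
# carry a RegionCode; each consumer handles only its own region's messages
# and deduplicates by message id. All names are illustrative.

class RegionConsumer:
    def __init__(self, local_region: str):
        self.local_region = local_region
        self.seen = set()      # processed ids (Redis/DB in a real system)
        self.handled = []

    def consume(self, msg: dict) -> bool:
        if msg["region_code"] != self.local_region:
            return False       # replicated copy owned by another unit: skip
        if msg["id"] in self.seen:
            return False       # duplicate delivery: drop, stay idempotent
        self.seen.add(msg["id"])
        self.handled.append(msg["id"])
        return True

c = RegionConsumer("EU")
for m in [{"id": "m1", "region_code": "EU"},
          {"id": "m1", "region_code": "EU"},     # duplicate delivery
          {"id": "m2", "region_code": "APAC"}]:  # other unit's message
    c.consume(m)
print(c.handled)  # ['m1']
```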
4.5 Project Business Layer
1) Business Unit Closure Refactoring – Partition users by region so each IDC can operate independently.
2) Request‑Chain Refactoring – Keep processing within the same region to avoid cross‑ocean latency.
3) Cross‑Region Scenario Refactoring – Convert serial external calls to asynchronous, pre‑fetch, or reduce cross‑region hops; non‑critical cross‑region tasks are handled asynchronously via MQ.
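The serial-to-asynchronous refactor in 3) can be sketched with `asyncio`: instead of awaiting each remote dependency in turn, the calls are fired concurrently, so total latency approaches the slowest single hop rather than the sum of all hops. The call names and latencies are simulated, not real Trip.com services.

```python
# Sketch of the serial-to-parallel refactor for cross-region calls.
# Latencies are simulated with asyncio.sleep; names are illustrative.

import asyncio

async def remote_call(name: str, latency: float) -> str:
    await asyncio.sleep(latency)   # stands in for a cross-region RPC
    return name

async def serial(calls):
    # Before: each call waits for the previous one; latencies add up.
    return [await remote_call(n, t) for n, t in calls]

async def parallel(calls):
    # After: calls run concurrently; total time is roughly max(latency).
    return await asyncio.gather(*(remote_call(n, t) for n, t in calls))

calls = [("fare", 0.05), ("seat", 0.05), ("promo", 0.05)]
print(asyncio.run(parallel(calls)))
```

Non-critical work (e.g., notifications) would instead be pushed onto MQ, as the article notes, removing it from the request path entirely.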
4.6 Issues Encountered and Evolution Thoughts
Key problems and solutions include:
DB Sync Conflict: network instability caused DRC replication conflicts; resolved by restricting order updates to a single IDC and moving toward a unit-based design.
Distributed Lock Limitation: Redis-based locks operate at region level; truly global locks would require a performance trade-off.
Multi-IDC Inventory Management: strategies include pre-allocating inventory per IDC, dynamic reallocation, and real-time monitoring and alerts.
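The inventory strategy above can be sketched as a per-IDC quota with a rebalance step. The quota numbers and IDC names are illustrative assumptions; a real system would persist quotas and drive rebalancing from monitoring signals.

```python
# Sketch of per-IDC inventory pre-allocation with dynamic reallocation:
# each IDC sells only from its local quota, avoiding cross-region locking;
# a rebalance step moves quota between units. Numbers are illustrative.

quota = {"FRA": 100, "SIN": 100}

def sell(idc: str, n: int) -> bool:
    """Decrement local quota only; never touch a remote IDC synchronously."""
    if quota[idc] >= n:
        quota[idc] -= n
        return True
    return False   # would trigger an alert / rebalance, not a global lock

def rebalance(from_idc: str, to_idc: str, n: int) -> None:
    """Move up to n units of quota from a surplus unit to a deficit unit."""
    moved = min(n, quota[from_idc])
    quota[from_idc] -= moved
    quota[to_idc] += moved

sell("FRA", 95)                 # FRA nearly exhausted
rebalance("SIN", "FRA", 50)     # monitoring-driven reallocation
print(quota)  # {'FRA': 55, 'SIN': 50}
```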
4.7 Evolution Results
The architecture evolution yielded:
An improved overall system architecture (see the system diagram).
Performance gains: optimized network paths reduced FRA latency by 300‑800 ms.
5. New Starting Point, New Journey
5.1 Unit‑Based Routing
Adopt the Group’s UCS (Unit Control Service) routing strategy, using user region as a sharding key to map requests to the appropriate IDC.
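In the spirit of that routing strategy, the user's region acts as a sharding key that deterministically maps each request to one unit. The mapping table and fallback unit below are assumptions for illustration, not the real UCS configuration.

```python
# Sketch of unit-based routing: user region -> owning unit (IDC).
# UNIT_MAP and DEFAULT_UNIT are hypothetical, not the actual UCS config.

UNIT_MAP = {"EU": "FRA", "APAC": "SIN"}
DEFAULT_UNIT = "SIN"   # assumed fallback for unmapped regions

def route_request(user_region: str) -> str:
    """Deterministically map a user's region to its unit."""
    return UNIT_MAP.get(user_region, DEFAULT_UNIT)

print(route_request("EU"))   # FRA
print(route_request("US"))   # falls back to the default unit
```

Determinism is the point: the same user always lands in the same unit, which is what makes intra-unit closure and per-unit data ownership possible.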
5.2 Data Unitization Refactoring
Maintain full‑copy bidirectional sync for fault tolerance while enabling the ability to cut off sync links for compliance, achieving true data unitization.
5.3 Business Central IDC Adjustment
Plan to shift the central IDC to Singapore, eventually decommissioning the original central IDC, to meet evolving compliance and business needs.
5.4 Conclusion
Trip.com’s train ticket service is progressing toward a robust global architecture, yet further work is needed to fully address complex multi‑region scenarios, data compliance, and performance challenges.
Ctrip Technology
Official Ctrip Technology account, sharing and discussing growth.