
Applying AB Testing in Ctrip Flight Booking: Process, Data Flow, and Analysis

The article explains how Ctrip’s flight‑booking team uses AB testing—from definition and experimental design to data collection, traffic allocation, orthogonal experiments, and result analysis—to drive conversion‑rate and revenue improvements across multiple platforms.

Ctrip Technology

The article's author, Li Ning, is a senior data and product manager at Ctrip. He outlines the evolution of AB testing (abbreviated ABT) within the company, highlighting its critical role in decision-making and KPI evaluation.

Definition of AB Testing: AB testing is described as a controlled-variable method borrowed from physics, used to isolate the impact of a single factor on conversion rate (CR) or revenue while ensuring both statistical and practical significance.
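To make "statistical significance" concrete: the difference between two conversion rates is commonly checked with a two-proportion z-test. The article does not name the exact test Ctrip uses, so this is a minimal sketch, and all numbers are illustrative:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: 10,000 visitors per arm, CR 4.8% vs. 5.6%.
z, p_value = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
```

Practical significance is the separate judgment call the article alludes to: even a p-value below 0.05 only matters if the uplift is large enough to justify shipping.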

AB Testing Process and Data Flow: When the app starts, a common framework fetches all active AB-test IDs and versions and caches them locally. On a user interaction (e.g., tapping a round-trip search), the client looks up the corresponding experiment version and fires a trace event (o_abtest_expresult) recording the client code, session ID, page-view ID, experiment ID, and version. ETL pipelines aggregate these events into an AB-experiment table for downstream analysis.
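The client-side flow above can be sketched roughly as follows. Apart from the o_abtest_expresult event name, every identifier, function, and field name here is an assumption for illustration, not Ctrip's actual API:

```python
import time

AB_CACHE = {}  # experiment_id -> version, populated at app startup

def fetch_active_experiments():
    # Stand-in for the call to the AB service; hard-coded for illustration.
    return {"EXP_ROUNDTRIP_SEARCH": "B"}

def on_app_start():
    # The common framework fetches all active experiments and caches them locally.
    AB_CACHE.update(fetch_active_experiments())

def emit_trace(event_name, payload):
    print(event_name, payload)  # stand-in for the real trace pipeline

def on_round_trip_search(client_code, session_id, pv_id):
    # Look up the cached version and fire the trace event described in the text.
    version = AB_CACHE.get("EXP_ROUNDTRIP_SEARCH", "A")  # A = default fallback
    emit_trace("o_abtest_expresult", {
        "client_code": client_code,
        "session_id": session_id,
        "pv_id": pv_id,
        "experiment_id": "EXP_ROUNDTRIP_SEARCH",
        "version": version,
        "ts": int(time.time()),
    })
    return version

on_app_start()
version = on_round_trip_search(client_code="ctrip.ios", session_id="s-001", pv_id="pv-001")
```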

Traffic Allocation: Each device computes a hash from its device ID, the experiment ID, and a random number; the hash modulo 100 determines the bucket (e.g., buckets 0–9 map to version A, 10–79 to version B, and so on). Version A serves as the default fallback whenever the hash is invalid.
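A minimal sketch of this bucketing scheme. The article does not specify the actual hash function, so MD5 stands in, and the `salt` parameter plays the role of the "random number"; the 80–99 bucket range for a version C is likewise an illustrative extension of the article's "etc.":

```python
import hashlib

# Illustrative split: the article gives 0-9 -> A, 10-79 -> B, "etc."
BUCKETS = [("A", range(0, 10)), ("B", range(10, 80)), ("C", range(80, 100))]

def assign_version(device_id, experiment_id, salt=""):
    """Hash device ID + experiment ID (+ salt), take modulo 100, map to a bucket."""
    if not device_id:
        return "A"  # version A is the default fallback for an invalid hash input
    key = f"{device_id}:{experiment_id}:{salt}".encode("utf-8")
    bucket = int(hashlib.md5(key).hexdigest(), 16) % 100
    for version, bucket_range in BUCKETS:
        if bucket in bucket_range:
            return version
    return "A"
```

Because the device and experiment IDs are hashed deterministically, a given device always sees the same version of a given experiment.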

Experiment Orthogonality: Non-orthogonal experiments split a shared traffic pool, which caps how many tests can run at once; orthogonal experiments re-randomize traffic independently, so multiple experiments can run on the same page. Experience suggests keeping simultaneous orthogonal experiments below seven to avoid hard-to-debug anomalies.
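Orthogonality can be checked empirically: if each experiment hashes traffic independently (i.e., the experiment ID is part of the hash key), a device's bucket in one experiment says nothing about its bucket in another. A small simulation with an illustrative hash and made-up IDs:

```python
import hashlib

def bucket(device_id, experiment_id):
    # Including the experiment ID in the hash key makes bucket assignments
    # across experiments (approximately) independent -- "orthogonal" traffic.
    key = f"{device_id}:{experiment_id}".encode("utf-8")
    return int(hashlib.md5(key).hexdigest(), 16) % 100

devices = [f"device-{i}" for i in range(10_000)]

# Of the devices in version A (bucket < 50) of exp1, the fraction that also
# lands in version A of exp2 should hover near 50% if traffic is orthogonal.
in_a1 = [d for d in devices if bucket(d, "exp1") < 50]
frac = sum(bucket(d, "exp2") < 50 for d in in_a1) / len(in_a1)
```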

Point-Removal Mechanism: After an AB test ends, the experiment-specific code is removed and the winning version is ramped to 100 % of traffic, ensuring the old version is fully retired and the app's size stays lean.

Data Analysis: The goal of AB testing is to demonstrate that the new version outperforms the old one. Analysts examine time-series charts of CR and revenue, applying a "focus on the large, ignore the small" principle to filter out noisy data points. When the charts are inconclusive, metrics are decomposed (e.g., into per-UV revenue and CR) to pinpoint the source of the change.
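The decomposition step can be made concrete: per-UV revenue factors exactly into conversion rate times average revenue per order, so movement in the headline metric can be attributed to one factor or the other. All numbers below are made up for illustration:

```python
def decompose(uv, orders, revenue):
    """Split per-UV revenue into CR x average order value."""
    cr = orders / uv                  # conversion rate
    rev_per_order = revenue / orders  # average revenue per order
    rev_per_uv = revenue / uv         # headline metric
    # Identity check: rev_per_uv == cr * rev_per_order.
    assert abs(rev_per_uv - cr * rev_per_order) < 1e-9
    return cr, rev_per_order, rev_per_uv

old = decompose(uv=100_000, orders=4_000, revenue=600_000)
new = decompose(uv=100_000, orders=4_400, revenue=616_000)
# Here CR rose (0.040 -> 0.044) while average order value fell (150 -> 140),
# and the net per-UV revenue still improved (6.00 -> 6.16).
```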

Core Metrics: Conversion rate and revenue are the primary KPIs. Projects are prioritized based on ROI, which may be driven by CR improvement, revenue uplift, or both. If an experiment fails to meet expectations, the team investigates page-level CR drops or profit declines, using SQL queries and business knowledge to identify root causes.
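A hypothetical helper for that page-level investigation: compare per-page conversion rates between the two versions and flag the funnel step with the largest relative drop. The page names and rates are invented:

```python
def worst_step(old_cr, new_cr):
    """Return the funnel step with the largest relative CR drop, plus all deltas."""
    deltas = {page: (new_cr[page] - old_cr[page]) / old_cr[page]
              for page in old_cr}
    return min(deltas, key=deltas.get), deltas

# Illustrative page-level conversion rates for a flight-booking funnel.
old_cr = {"search": 0.60, "list": 0.35, "booking": 0.50, "payment": 0.80}
new_cr = {"search": 0.61, "list": 0.34, "booking": 0.42, "payment": 0.80}

step, deltas = worst_step(old_cr, new_cr)
# The booking page shows a -16% relative drop, the largest in the funnel,
# so that is where the SQL drill-down would start.
```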

Conclusion: Effective AB testing relies on clear problem definition, rigorous experimental design, and systematic data analysis. Mastering these fundamentals enables product managers to independently resolve the majority of issues and accelerate product iteration.

Tags: AB testing, data analysis, product management, experiment design, conversion rate
Written by Ctrip Technology, the official Ctrip Technology account, sharing and discussing growth.
