Ctrip User Center Interface Automation Testing: Architecture, Lifecycle, and Practical Solutions
This article presents Ctrip's user‑center interface automation testing platform, detailing its three‑component architecture, the full test‑task lifecycle, and practical solutions that reduce case‑writing difficulty, improve robustness, integrate manual testing, and enable automatic failure analysis.
The author, Gao Feng, a senior test engineer at Ctrip User Platform R&D, introduces an interface automation testing project created to address agile development challenges such as delayed smoke tests, high manual regression cost, and costly maintenance of automated cases.
The platform consists of three main parts: (1) a no‑code entry platform for writing and debugging test cases; (2) a case parser that separates scripts from data and handles initialization, parameterization, request sending, checkpoint verification, logging, and post‑processing; and (3) a scheduling management platform that assigns execution machines, runs the parser, generates reports, and notifies stakeholders. All test data and scripts are stored in a database, which decouples them from one another.
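The script/data decoupling described above can be illustrated with a short sketch: the "script" is a request template with placeholders, and the data rows are stored separately, so one script can drive many cases. All field names and the `${name}` placeholder syntax here are illustrative assumptions, not Ctrip's actual schema.

```python
import re

def render_template(template: str, data: dict) -> str:
    """Substitute ${name} placeholders in a stored script with case data."""
    return re.sub(r"\$\{(\w+)\}", lambda m: str(data[m.group(1)]), template)

def check(response: dict, checkpoints: dict) -> list:
    """Return the failed checkpoints as (field, expected, actual) tuples."""
    return [
        (field, expected, response.get(field))
        for field, expected in checkpoints.items()
        if response.get(field) != expected
    ]

# One stored script plus two data rows yields two concrete request bodies.
script = '{"uid": ${uid}, "action": "${action}"}'
rows = [{"uid": 1001, "action": "query"}, {"uid": 1002, "action": "query"}]
bodies = [render_template(script, row) for row in rows]
```

Because the checkpoint verification also runs against declarative data (expected field values) rather than hand-written assertions, the same parser can execute every case stored in the database.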
The test‑task lifecycle is divided into four stages: (1) writing test cases, which functional testers can do via the web‑based no‑code platform; (2) initiating tasks, with support for real‑time, scheduled, and Hermes‑triggered smoke tasks; (3) executing tests, where the scheduler dispatches cases to execution agents and the parser performs data initialization, parameter substitution, validation, logging, and post‑processing; and (4) reporting, where online reports are compiled, emailed, and optionally auto‑analyzed, with failed cases available for re‑run.
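The execute-and-report stages can be sketched as a scheduler that runs each case through an execution callable, compiles a pass/fail report, and re-runs only the failures. The names and data shapes below are illustrative, not the platform's real interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    """Compiled result of one task run, as produced in the reporting stage."""
    passed: list = field(default_factory=list)
    failed: list = field(default_factory=list)

def run_task(cases: list, execute) -> Report:
    """Dispatch each case to the execution function and compile a report."""
    report = Report()
    for case in cases:
        bucket = report.passed if execute(case) else report.failed
        bucket.append(case["name"])
    return report

def rerun_failures(report: Report, execute) -> Report:
    """Re-run only the failed cases, as the reporting stage allows."""
    return run_task([{"name": name} for name in report.failed], execute)
```

A real scheduler would dispatch cases to remote execution agents in parallel; the sequential loop here only shows the control flow.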
Practical solutions are presented in four areas: (1) lowering the difficulty of writing automated cases by offering no‑code automation, case‑copying, and bulk case generation; (2) ensuring case robustness through data parameterization, database/Redis initialization, precase definitions, post‑processing, and built‑in utility APIs for random numbers, UUIDs, signatures, etc.; (3) converting manual testing results directly into automated cases via a real‑time debugging feature; and (4) providing automatic failure analysis to classify non‑functional issues and reduce debugging effort.
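The built-in utility APIs mentioned in (2) can be sketched as follows; the specific signature scheme (HMAC-SHA256) and function names are assumptions for illustration, not necessarily what the platform uses.

```python
import hashlib
import hmac
import random
import uuid

def util_random_int(lo: int = 0, hi: int = 999_999) -> int:
    """Random integer for generating unique test data per run."""
    return random.randint(lo, hi)

def util_uuid() -> str:
    """Hex UUID usable as a trace id or a unique username/email prefix."""
    return uuid.uuid4().hex

def util_sign(payload: str, secret: str) -> str:
    """Deterministic request signature so signed APIs can be exercised."""
    return hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()
```

Exposing these as placeholders inside case data (rather than code) is what lets non-programmers author cases that still satisfy uniqueness and signing requirements.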
Additional enhancements cover a broader range of testing scenarios through three case types: O cases verify a standard response; N cases verify database state for create/delete APIs; and compareCase cases compare responses between two interfaces or two test versions.
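A compareCase-style check can be sketched as a diff of two JSON responses that ignores volatile fields such as timestamps or trace ids. The flat-dict assumption and field names are simplifications for illustration.

```python
def compare_responses(a: dict, b: dict, ignore: frozenset = frozenset()) -> list:
    """Return the keys whose values differ between two responses,
    skipping any keys listed in `ignore`."""
    return sorted(
        key
        for key in set(a) | set(b)
        if key not in ignore and a.get(key) != b.get(key)
    )

# Two test versions of the same interface: identical except a volatile field.
old = {"uid": 1, "balance": 50, "ts": 1690000000}
new = {"uid": 1, "balance": 50, "ts": 1690000123}
diffs = compare_responses(old, new, ignore=frozenset({"ts"}))  # no real diffs
```

This kind of check is useful both for comparing two interfaces that should agree and for regression-testing a new release against the current one.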
The article also shows screenshots of the platform’s dashboards, interface management, test‑case authoring page, case query management, online report view, and failure‑analysis view, illustrating the visual aspects of the system.
In conclusion, the user‑center interface automation platform has run stably for over two and a half years and continues to iterate to meet evolving user needs.
Ctrip Technology