How to Build a Generic API Robustness Scanning System for Automated Test Case Generation
This article presents a comprehensive, automated solution for API robustness testing. The system extracts baseline cases, generates exhaustive parameter-level test data, executes the cases at scale, and analyzes the results to flag abnormal responses, all without manual effort, improving both testing efficiency and software quality.
Background
Rapid business growth has led to an explosion of services, interfaces, and parameters, making exhaustive manual testing impractical. Testers face limited time, high effort for parameter‑level cases, and difficulty covering edge values such as maximum lengths or special values.
Problem Statement
Relying solely on manual testing would incur huge labor costs and time consumption. The goal is to automate the construction of test data and partially automate testing to reduce manual effort.
Solution Overview
The proposed generic interface robustness scanning solution establishes universal rules for test case generation, handles boundary and special values, fully automates case creation, execution, and result analysis, and can be extended with product‑specific rules.
System Design
The testing workflow consists of four core steps:
Data source acquisition and preprocessing from platforms such as gateway and traffic replay.
Construction of rule models and case‑generation algorithms, including built‑in and user‑defined rules.
Execution of generated cases against services.
Result analysis using rule‑based evaluation to flag problematic cases.
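The four steps above can be sketched as a simple pipeline. The stage interfaces, class name, and the string-based case representation below are illustrative assumptions, not the system's actual types:

```java
import java.util.ArrayList;
import java.util.List;

public class ScanPipeline {
    // Hypothetical stage interfaces mirroring the four workflow steps.
    interface DataSource { List<String> pullBaselineCases(); }
    interface CaseGenerator { List<String> generate(String baseCase); }
    interface CaseExecutor { String execute(String testCase); }
    interface ResultAnalyzer { boolean isAbnormal(String response); }

    private final DataSource source;
    private final CaseGenerator generator;
    private final CaseExecutor executor;
    private final ResultAnalyzer analyzer;

    ScanPipeline(DataSource s, CaseGenerator g, CaseExecutor e, ResultAnalyzer a) {
        this.source = s; this.generator = g; this.executor = e; this.analyzer = a;
    }

    // Runs all four stages and returns the responses flagged as abnormal.
    List<String> scan() {
        List<String> flagged = new ArrayList<>();
        for (String baseCase : source.pullBaselineCases()) {
            for (String testCase : generator.generate(baseCase)) {
                String response = executor.execute(testCase);
                if (analyzer.isAbnormal(response)) {
                    flagged.add(response);
                }
            }
        }
        return flagged;
    }
}
```

Keeping the stages behind interfaces is what allows new data-source platforms and product-specific rules to be plugged in later without touching the pipeline itself.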
Data Source Parsing
Baseline cases are derived from gateway and replay platforms. To avoid overwhelming the system, data is pulled every 30 minutes with retry logic on failure. Parsed data yields application, service, and parameter metadata, which is persisted for later use. Additional platforms can be integrated to enrich the baseline.
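A minimal sketch of the pull-with-retry loop described above. The `PlatformClient` interface, the retry budget of three attempts, and the method names are assumptions for illustration:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BaselinePuller {
    private static final int MAX_RETRIES = 3; // assumed retry budget

    // Hypothetical client for a gateway or traffic-replay platform.
    interface PlatformClient { List<String> fetchTraffic() throws Exception; }

    // Pulls parsed baseline metadata from one platform, retrying on failure.
    static List<String> pullWithRetry(PlatformClient client) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                return client.fetchTraffic();
            } catch (Exception e) {
                // A real system would log the failure and back off here.
            }
        }
        return Collections.emptyList(); // give up until the next cycle
    }

    // Schedules the pull every 30 minutes; called once at startup.
    static ScheduledExecutorService startPolling(PlatformClient client) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> pullWithRetry(client), 0, 30, TimeUnit.MINUTES);
        return scheduler;
    }
}
```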
Case Generation Algorithm Model
Two primary rule types are demonstrated:
NULL rule: removes a field from the request to test null handling.
EMPTY rule: replaces a field's value with an empty value (e.g., an empty string or collection) to test empty-value handling.
Special‑value rules (e.g., Integer.MAX_VALUE, String.maxLength("AAA.........")) replace parameter values to create edge‑case tests. Custom user rules can be added by implementing a generateCases method.
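For instance, a special-value rule for `Integer.MAX_VALUE` might look like the sketch below. The `Map`-based case representation and the class name are assumptions made for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical rule: for each integer parameter in the baseline case,
// emit one variant with that parameter set to Integer.MAX_VALUE.
public class MaxIntRule {
    public List<Map<String, Object>> generateCases(Map<String, Object> baseCase) {
        List<Map<String, Object>> cases = new ArrayList<>();
        for (String param : baseCase.keySet()) {
            if (baseCase.get(param) instanceof Integer) {
                Map<String, Object> mutated = new HashMap<>(baseCase);
                mutated.put(param, Integer.MAX_VALUE); // boundary value
                cases.add(mutated);
            }
        }
        return cases;
    }
}
```

A user-defined rule follows the same shape: derive variants from the baseline case and return them as a list.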
public List<Cases> caseGenerate() {
    List<Cases> cases = new ArrayList<>();
    Cases baseCase = getBaseCase();
    for (AlgorithmModel algorithmModel : algorithmModels) {
        // Each rule model derives its own variants from the baseline case.
        List<Cases> caseTmpList = algorithmModel.generateCases(baseCase);
        cases.addAll(caseTmpList);
    }
    return cases;
}
Case Execution
Generated cases are executed every 30 minutes, offset by 30 minutes from data‑pull tasks to avoid conflicts. Execution uses multi‑task parallelism; failed calls trigger a compensation retry mechanism to ensure every case completes.
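The parallel execution with a compensation pass could be sketched as follows. The `CaseRunner` interface, the pool size, and the single-retry compensation are assumptions; the article does not specify them:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CaseExecutionJob {
    interface CaseRunner { boolean run(String testCase); } // true on success

    // Runs all cases in parallel, then re-runs the failures once
    // (the compensation pass); returns the number of cases that succeeded.
    static int executeAll(List<String> cases, CaseRunner runner)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<String> pending = cases;
            int succeeded = 0;
            for (int pass = 0; pass < 2 && !pending.isEmpty(); pass++) {
                List<Future<Boolean>> futures = new ArrayList<>();
                for (String c : pending) {
                    futures.add(pool.submit(() -> runner.run(c)));
                }
                List<String> failed = new ArrayList<>();
                for (int i = 0; i < futures.size(); i++) {
                    if (futures.get(i).get()) succeeded++;
                    else failed.add(pending.get(i));
                }
                pending = failed; // compensation pass re-runs only the failures
            }
            return succeeded;
        } finally {
            pool.shutdown();
        }
    }
}
```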
Result Analysis
Two main error‑detection rules are applied:
Unreasonable error messages: non‑technical, ambiguous messages are flagged based on predefined language patterns.
Backend execution anomalies: responses that contradict expected success/failure status are identified using a curated rule set derived from thousands of real results.
Rules are continuously refined with new data and require strict development guidelines to maintain consistency.
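The first analysis rule, flagging unreasonable error messages, amounts to matching responses against predefined language patterns. The patterns below are illustrative stand-ins, not the curated rule set the article describes:

```java
import java.util.List;
import java.util.regex.Pattern;

public class MessageAnalyzer {
    // Illustrative patterns: messages that leak internals or say nothing useful.
    private static final List<Pattern> BAD_PATTERNS = List.of(
        Pattern.compile("(?i)nullpointerexception|stacktrace|sql"), // technical leakage
        Pattern.compile("(?i)^(error|failed|unknown error)\\.?$")   // ambiguous wording
    );

    static boolean isUnreasonable(String message) {
        for (Pattern p : BAD_PATTERNS) {
            if (p.matcher(message).find()) return true;
        }
        return false;
    }
}
```

Because the rules are refined continuously, keeping each one as a small, independently testable pattern makes it cheap to add or retire rules as new result data arrives.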
Summary and Outlook
The solution automates baseline case acquisition, generates comprehensive parameter‑level test data, executes tests at scale, and performs rule‑based result analysis, significantly reducing manual effort and improving interface robustness. Future work includes incorporating business‑specific variations, enhancing rule precision with larger data sets, detecting false‑positive success responses, and continuously enriching baseline cases.
Youzan Coder
Official Youzan tech channel, delivering technical insights and updates from the Youzan tech team.