How a Scalable Automation Framework Revolutionized Marketing API Testing
This article details the evolution of a marketing‑focused automation framework—from a siloed "chimney" architecture to a modular, middle‑platform design and finally a stable, extensible system—highlighting challenges, design choices, code generation, CI/CD integration, and measurable efficiency gains.
Introduction
"The art of architecture design lies in finding the simplest solution to meet business needs." — John Gall
Mutation: the framework is named after the word "mutation", signifying its ability to adapt flexibly to constantly changing marketing interfaces and system requirements.
Background and Challenges
Interface automation is crucial for quality assurance, improving test efficiency and reducing labor costs. Marketing activities are time‑sensitive, highly flexible, and prone to frequent changes, making rapid, comprehensive testing essential to avoid financial loss.
1. V1.0 Chimney Era – Building the Basic Automation Framework
1.1 Challenges
Early development prioritized speed, resulting in a chimney‑style architecture with duplicated custom code, cumbersome configuration, and high maintenance overhead.
1.2 Problems to Solve
- Rapid response and verification: quickly validate numerous marketing scenarios.
- Flexible adaptation: handle frequent changes without breaking tests.
- Data-driven optimization: structure marketing data for reuse and management.
1.3 Design Highlights
- Collaboration-friendly: unified formats and component/flow standards.
- High reusability: supports multiple tools, test-data generation, core scenario verification, etc.
- Plug-in architecture: supports HTTP, SOA, Kafka, and MySQL, with extensible slots.
- Templates & parameterization: JSON/YAML templating, inheritance, and variable substitution.
- Strong flexibility: multiple data-extraction methods, format conversion, and built-in expression engines (Aviator, BeanShell).
- CI/CD integration: can be embedded in pipelines for rapid iteration.
1.4 Framework Details
1.4.1 Layered Design
The framework is divided into component, business‑flow, and test‑case layers to improve maintainability and readability.
{
"componentName": "Just test-genorderids",
"url": "${trade4showcase.tradeplatformUrl}/order",
"operation": "POST",
"request": {
"headers": {},
"queryParams": {},
"cookies": {},
"bodyType": "JSON",
"body": {
"method": "genOrderIds",
"jsonrpc": 1,
"id": 1,
"params": [${userId}, ${genOrderIdsRequest}]
}
}
}

Component implementation example:
@Override
@LLRequestSpec(value = "classpath:json/trade4showcase/createOrder.json")
public void createOrder(LLContext context) {
execute(context);
}

Non-API component example:
public interface PreposeService {
void externalFileReplace(LLContext context);
void getSSOToken(LLContext context);
void getCurrentEnv(LLContext context);
void triggerJob(LLContext context);
}

Business-flow example (order process):
@Override
public void showcaseOrder(LLContext context) {
preposeService.getCurrentEnv(context);
preposeService.getSSOToken(context);
preposeService.externalFileReplace(context);
tradePlatform4ShowCaseService.genOrderIds(context);
tradePlatform4ShowCaseService.createOrder(context);
tradePlatform4ShowCaseService.enableOrder(context);
tradePlatform4ShowCaseService.queryOrderById(context);
}

Test case example with data provider and assertions:
@Test(dataProvider = "data4ShowCaseTestSync", dataProviderClass = TradePlatform4ShowCaseDataProvider.class)
public void testOrderWithDataProvider(String userId) {
LLContext context = new LLContext();
context.getInput().put("userId", userId);
context.getReplaceMap().put("genOrderIdsRequest", "JSON.FILE:json/trade4showcase/genOrderIds_request.json");
order4ShowCaseFlow.showcaseOrder(context);
String orderInfo = (String) context.getOutput().get("queryOrderById_result");
String orderId = (String) context.getOutput().get("genOrderIds_orderId");
String actualOrderId = JsonPathUtils.getResult(orderInfo, "$.result.data.orderId");
String actualUserId = JsonPathUtils.getResult(orderInfo, "$.result.data.buyerId");
assertThat(actualOrderId).as("Validate order ID").isEqualTo(orderId);
assertThat(actualUserId).as("Validate user ID").isEqualTo(userId);
}

The factory layer provides data-driven utilities, such as retrieving order IDs from the database.
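The article does not show the factory layer's code. As a rough illustration only, the sketch below uses a hypothetical interface and an in-memory stub; the real implementation would presumably query the order database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of a factory-layer utility. The interface name and
// method are assumptions; an in-memory map stands in for the real database.
interface OrderIdFactory {
    Optional<String> findOrderIdByUserId(String userId);
}

class InMemoryOrderIdFactory implements OrderIdFactory {
    private final Map<String, String> ordersByUser = new HashMap<>();

    // Seed the stub with a known mapping (test-data preparation step).
    void save(String userId, String orderId) {
        ordersByUser.put(userId, orderId);
    }

    @Override
    public Optional<String> findOrderIdByUserId(String userId) {
        return Optional.ofNullable(ordersByUser.get(userId));
    }
}
```

Returning `Optional` makes the "no fresh data available" case explicit to the calling test, rather than surfacing as a null later in the flow.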
1.4.2 Modularity
Templates abstract component definitions; the same structure can represent HTTP requests, Kafka messages, or SQL statements.
{
"componentName": "Just test kafka",
"url": "this is kafka topic",
"operation": "produce",
"request": {
"async": true,
"records": [{"key": "hello world", "value": "hello kitty"}]
}
}

Parameterization is achieved via the @LLRequestSpec annotation, which allows context variables to replace placeholders in JSON files.
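The article does not spell out how the `${...}` substitution behind @LLRequestSpec works. A minimal sketch of such placeholder replacement might look like this (class and method names are hypothetical, not the framework's actual API):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of ${placeholder} substitution: every ${name} occurrence in
// the template is replaced with the matching value from the context map.
class TemplateFiller {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    static String fill(String template, Map<String, String> context) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Unknown keys are left untouched so a missing variable is visible
            // in the rendered request instead of silently becoming empty.
            String value = context.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

Applied to the component template shown earlier, `fill` would turn `"params": [${userId}, ...]` into a concrete request body once the test puts `userId` into the context.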
2. V2.0 Middle‑Platform Era – Upgrading the Framework to Reduce Costs
2.1 Challenges
Middle‑platform abstraction introduces shared services, but changes to these services can break many tests, requiring a more flexible and maintainable automation solution.
2.2 Problems to Solve
- Functional abstraction: unify common marketing components.
- Execution-chain optimization: shorten long, tightly coupled test flows.
- Data freshness & cost control: generate fresh test data automatically.
- Code-generation cost: auto-generate component code from specifications.
2.3 Design Characteristics
- Strategy-based classification of common business models.
- Data-pool construction for fresh test data.
- Automatic code generation for all tested objects.
- Component-level capability sinking that replaces hard-coded flow assembly.
2.4 Architecture Details
2.4.1 Strategy Model Structure
The flow layer defines a MarketingStrategy interface with create, filter, and doSomeThing methods.
The component layer implements business-domain interfaces.
The test layer follows three steps: data preparation, flow invocation, and result storage with assertions.
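The strategy contract described above can be sketched as an interface. The three method names come from the article; the parameter types, and the stand-in LLContext class, are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the flow-layer strategy contract: the article names create,
// filter, and doSomeThing; taking an LLContext parameter is an assumption.
interface MarketingStrategy {
    void create(LLContext context);      // prepare the marketing activity
    boolean filter(LLContext context);   // decide whether this strategy applies
    void doSomeThing(LLContext context); // run the strategy-specific steps
}

// Stand-in for the framework's context object, for illustration only.
class LLContext {
    private final Map<String, Object> input = new HashMap<>();

    Map<String, Object> getInput() {
        return input;
    }
}
```

Classifying business models behind one interface like this is what lets the test layer invoke any strategy through the same three-step shape (prepare data, invoke flow, assert results).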
2.4.2 Data Freshness
Data pools are built daily based on defined rules, sourcing from data factories, big data, or traffic replay to ensure reliable test data.
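As a rough illustration of the pool-building step, the sketch below drops candidate records that fall outside a freshness window; the class, record shape, and rule are hypothetical, since the article describes the rules only in general terms:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of a daily data-pool refresh rule: candidates older
// than the freshness window are dropped; the remainder form today's pool.
class DataPoolBuilder {
    static List<TestRecord> buildPool(List<TestRecord> candidates, Duration maxAge, Instant now) {
        return candidates.stream()
                .filter(r -> Duration.between(r.createdAt, now).compareTo(maxAge) <= 0)
                .collect(Collectors.toList());
    }
}

class TestRecord {
    final String id;
    final Instant createdAt;

    TestRecord(String id, Instant createdAt) {
        this.id = id;
        this.createdAt = createdAt;
    }
}
```

In practice the candidates would come from the data factories, big-data extracts, or traffic replay mentioned above, with a rule per business scenario rather than a single age cutoff.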
2.4.3 Code Generation Components
Automatic generation of component code from LDoc interface metadata supports HTTP, SOA, and RESTful services.
Configuration example (YAML):
ops-user-discount-svc:
genApiType: "methods"
genApiTypeValue: "/?name=act_strategy&version=get_settings,subsidy/strategy/getOrderSubsidy@"
jsonPath: "../mutation-core/src/main/resources/json/intelligentoperation/subsidy"
classPath: "../mutation-core/src/main/java/cn/huolala/mutation/component/intelligentoperation/subsidy"
className: "TestLu"

Execution via shell script:

sh run.sh ops-user-discount-svc

3. V3.0 Stable Era – Extending the Framework for Emergency & Loss Prevention
3.1 Business Characteristics & Challenges
Rapid business growth increases complexity and the risk of financial loss. Quick detection and verification in production are critical.
3.2 Problems to Solve
Automation safety in production (preventing data corruption).
Rapid problem localization (sub‑minute verification).
Stability of test environments (avoid token expiry, script errors).
3.3 Design Features
Assertion tracing components automatically collect failure reasons and provide traceability.
Custom annotation example for collecting test metadata:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface CustomAnnotation {
String actId() default "";
String taskId() default "";
String description() default "";
String remark() default "";
String author() default "";
}

Assertion collection extracts the class, method, author, parameters, and trace ID, then pushes messages to Feishu:
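The extraction step itself can be sketched with plain reflection. The annotation and names below are stand-ins patterned on the example above, not the framework's real classes; the TestNG listener wiring and the Feishu push are omitted:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Sketch of the metadata-extraction step: given a failed test method, read
// its annotation and format a failure summary for the notification message.
class FailureMetadataCollector {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface CaseMeta {                 // stand-in for CustomAnnotation
        String author() default "";
        String description() default "";
    }

    static String summarize(Method testMethod) {
        CaseMeta meta = testMethod.getAnnotation(CaseMeta.class);
        String author = (meta == null) ? "unknown" : meta.author();
        return testMethod.getDeclaringClass().getSimpleName()
                + "#" + testMethod.getName() + " (author: " + author + ")";
    }

    @CaseMeta(author = "alice", description = "demo case")
    void sampleCase() { }

    // Convenience wrapper so the checked reflection exception stays internal.
    static String summarizeSample() {
        try {
            return summarize(FailureMetadataCollector.class.getDeclaredMethod("sampleCase"));
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Attaching the author to every failure message is what makes the Feishu notification actionable: the alert routes straight to the case owner instead of a shared queue.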
@Override
public void onFinish(ISuite suite) {
String groupName = systemConfiguration.getGroupName();
if (StringUtils.isBlank(groupName)) {
groupName = DEFAULT_GROUP_NAME;
}
String failCaseInfos = "";
if (ObjectUtils.isNotEmpty(concurrentHashMap.get(groupName))) {
failCaseInfos = ((StringBuffer) concurrentHashMap.get(groupName)).toString();
}
try {
feishuNoticeFlow.sendCaseFailInfo(failCaseInfos, groupName);
} catch (Exception e) {
e.printStackTrace();
} finally {
concurrentHashMap.put(groupName, new StringBuffer()); // reset with an empty buffer to keep the value type consistent
}
}

Security checks ensure that production automation only operates on whitelisted user/driver IDs:
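The whiteJudge helper used by this check is not shown in the article. One plausible sketch is below; the class name is hypothetical, and field extraction is simplified to a string match where the real code would parse the JSON body:

```java
import java.util.List;

// Hypothetical sketch of the whitelist helper: returns true only when the
// request body carries a whitelisted value for the given field. Real code
// would parse the JSON body; a naive string search is used here for brevity.
class WhiteListGuard {
    static boolean whiteJudge(String body, List<String> whiteList, String fieldName) {
        for (String id : whiteList) {
            if (body.contains("\"" + fieldName + "\":\"" + id + "\"")) {
                return true;
            }
        }
        return false;
    }
}
```

Defaulting to `false` for anything off the whitelist means a misconfigured test fails safe: it is blocked rather than allowed to mutate a real user's production data.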
public static Boolean defaultJudge(String body, String operation) {
if (isInWhiteString(OPERATION_WHITE, operation)) {
return whiteJudge(body, USER_WHITE, "userId") ||
whiteJudge(body, DRIVER_WHITE, "driverId") ||
whiteJudge(body, EP_WHITE, "epId");
}
return false;
}

3.4 Benefits
Debugging cost reduced: failure analysis time from 5 minutes to 30 seconds.
Production emergency verification speed increased 30‑fold (1 hour → 2 minutes).
Loss‑prevention validation accelerated 15‑fold (30 minutes → 2 minutes).
4. Summary & Reflections
Interface automation testing provides fast, repeatable verification of API correctness, stability, and coverage, but still faces challenges such as data richness and script maintenance. Combining automation with traffic replay and AI/ML techniques (e.g., voice and image testing, contract testing) can further enhance test realism and efficiency.