Automated Mock for E2E Testing: Design and Implementation of Unmanned MOCK
Unmanned MOCK automatically generates intelligent, context‑aware mock responses for downstream services in end‑to‑end tests. By collecting sub‑call data, extracting knowledge, and applying dynamic rules, it isolates failures in downstream systems, raising test success rates toward nearly 100% without manual mock configuration.
Unmanned MOCK is a solution that uses data collection, knowledge extraction, and containerized mock techniques to automatically generate mocks in end‑to‑end (E2E) testing scenarios.
Background: The execution success rate (excluding bug‑related failures) is a core metric of automation effectiveness. In long‑chain order‑placement E2E cases the success rate hovers around 80%, mainly due to downstream‑system issues (deployment problems, bugs, etc.). The goal is to raise this rate to nearly 100% by isolating downstream failures.
The core demand is: when a downstream system crashes, it should not affect the upstream E2E regression. Historically, the maritime industry solved a similar problem with watertight compartments; the article proposes a software analogue—organizational‑structure‑based isolation.
Implementing such isolation requires massive mock configurations across thousands of applications, which is infeasible manually. The challenge is to achieve unmanned intelligent MOCK that can:
Provide custom mock results for the same service interface across different test cases.
Generate distinct mock results for repeated calls within the same test case (e.g., unique order IDs).
Handle inter‑dependent sub‑calls where later calls rely on data produced by earlier ones.
Each sub‑call can be abstracted into two categories: information query and data write. Queries contain a key identifier (usually an ID) in the request and return the same identifier plus newly generated data. Writes return a newly created identifier and associated data.
Example request and response (shown in code blocks):

```json
{"coordTypeEnum":"GCJ02","point":{"lat":3*.**4837,"lng":1**.**057},"storeIds":["10*****102"]}
```

Result:

```json
{"code":"200","data":{"point":{"lat":3*.**4837,"lng":1**.**057},"storeInfos":[{"hitStatus":"NO_STORE","id":"10*****102"}]},"message":"success"}
```

The formula for an intelligent mock result is derived as:
MOCK result = f(request parameters + random identifier + time factor + execution context + historical data)
Variables needed at call time:
Request parameters – captured via function enhancement (vip‑agent capability).
Random identifier – generated dynamically.
Time factor – generated based on the current timestamp.
Execution context – generated dynamically.
Two more ingredients must be prepared beforehand:
Historical data – collected, stored, and supplied to the server during test execution.
f() – the rule that decides how each key is derived (historical value, random generation, or request extraction). This requires extensive data analysis.
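The formula above can be sketched as a small Python function. This is a minimal illustration only, not the actual KBT implementation; the rule tags (`FROM_REQUEST`, `RANDOM_ID`, `TIMESTAMP`, `HISTORY`) and all field names are hypothetical.

```python
import random
import time

# Hypothetical replacement-rule tags; the real f() rules are produced by KBT analysis.
FROM_REQUEST, RANDOM_ID, TIMESTAMP, HISTORY = "from_request", "random_id", "timestamp", "history"

def generate_mock_result(request, historical_result, rules):
    """Derive each key of the mock response per its rule: copy from the
    request, generate a fresh identifier, use the current time, or fall
    back to the recorded historical value."""
    result = {}
    for key, rule in rules.items():
        if rule == FROM_REQUEST:
            result[key] = request[key]  # echo the key identifier from the request
        elif rule == RANDOM_ID:
            result[key] = str(random.randint(10**9, 10**10 - 1))  # e.g. a unique order ID
        elif rule == TIMESTAMP:
            result[key] = int(time.time() * 1000)  # time factor
        elif rule == HISTORY:
            result[key] = historical_result[key]  # replayed historical data
    return result

# Example: an order-creation sub-call mocked with a fresh order ID per call,
# so repeated calls within one test case get distinct identifiers.
request = {"storeId": "101"}
history = {"storeId": "101", "orderId": "2000001", "status": "CREATED"}
rules = {"storeId": FROM_REQUEST, "orderId": RANDOM_ID, "status": HISTORY}
mock = generate_mock_result(request, history, rules)
```

Because the rule table is supplied per test case and per sub‑call, this shape also covers the earlier challenges: different cases get different rules, and repeated calls regenerate the random identifier.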
To realize these, the system needs capabilities for sub‑call parameter collection, data analysis, and knowledge extraction.
Implementation Plan (Section 02): The solution is split into two phases: data collection & analysis, and anomaly detection & automatic mock.
Data Collection & Analysis: vip‑agent captures sub‑call information (request and response) from E2E cases and reports it to vip‑prod, where KBT performs analysis and knowledge extraction, producing dynamic configurations.
Anomaly Detection & Automatic Mock : During execution, vip‑agent monitors sub‑calls in real time. When a downstream exception occurs, the event is reported to KBT, which decides whether to mock. If so, KBT generates target sub‑call info and dynamic config, vip‑prod triggers the mock, and the test proceeds without failure.
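The detection‑and‑mock flow described above might be sketched as follows. The component names mirror the article, but every function signature here is hypothetical, and `StubKBT` stands in for the real decision service.

```python
class StubKBT:
    """Stand-in for the KBT decision service (hypothetical interface)."""
    def __init__(self):
        self.anomalies = []

    def report_anomaly(self, case_id, interface, exc):
        self.anomalies.append((case_id, interface, str(exc)))

    def should_mock(self, case_id, interface):
        # The real KBT decides from knowledge extracted during data collection;
        # this stub mocks any service whose name starts with "alsc-".
        return interface.startswith("alsc-")

    def generate_dynamic_config(self, case_id, interface):
        return {"case": case_id, "interface": interface, "mock": True}

def on_sub_call_exception(case_id, interface, exception, kbt):
    """vip-agent observes a downstream exception, reports it to KBT, and,
    if KBT decides to mock, returns the dynamic config that vip-prod
    would push to trigger the mock so the test case proceeds."""
    kbt.report_anomaly(case_id, interface, exception)
    if kbt.should_mock(case_id, interface):
        return kbt.generate_dynamic_config(case_id, interface)
    return None  # no mock: let the failure surface normally

kbt = StubKBT()
config = on_sub_call_exception("case-1", "alsc-delivery-service", RuntimeError("timeout"), kbt)
```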
The architecture allows the collection and execution domains to overlap, enabling continuous data gathering while automatically mocking on failures.
Mock Implementation: Mocks run inside a Dubbo container, and mock rules are standardized as JSON. Two components are defined:
Rules – control whether a specific sub‑call (traffic + interface filter) should be mocked, managed by a state machine.
Dynamic Configuration – specifies, per test case and sub‑call, the historical result and replacement rules (the f() function).
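As an illustration of what such standardized JSON might contain (the article does not publish the real schema, so every field name below is an assumption):

```python
import json

# Hypothetical shapes for the two components; the real schema is internal.
rule = {
    "interface": "alsc-delivery-service:queryStore",  # traffic + interface filter
    "caseId": "case-1",
    "state": "MOCK_ON",  # managed by a state machine, per the article
}
dynamic_config = {
    "caseId": "case-1",
    "historicalResult": {"code": "200", "data": {"hitStatus": "NO_STORE"}},
    # Replacement rules express f() per response key: keep history,
    # copy from the request, or regenerate.
    "replacementRules": {"data.id": "FROM_REQUEST", "data.orderId": "RANDOM_ID"},
}
payload = json.dumps({"rule": rule, "dynamicConfig": dynamic_config})
```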
Dynamic isolation further ties mock decisions to the owning team of the test case and the system, ensuring that only appropriate calls are mocked.
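The isolation decision could be reduced to a predicate like the following. This is a deliberate simplification under assumed team metadata, mirroring the "watertight compartment" idea: a team's regression should not sink because of another team's system.

```python
def should_isolate(case_owner_team, downstream_owner_team, downstream_failed):
    """Mock the failing sub-call only when the downstream system is owned
    by a different team than the test case; failures in the team's own
    system should still surface as real test failures."""
    return downstream_failed and case_owner_team != downstream_owner_team
```

For example, a trade-team regression would mock a failing delivery-team service, but a failure in the trade team's own service would still fail the test.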
Progress & Planning (Section 03): The core capabilities have been developed and deployed on critical paths, achieving the expected effect. A case study shows an order‑rendering scenario where an exception in alsc‑delivery‑service triggers a rule, and the automatic mock returns the expected result, verified via Arthas monitoring.
The broader impact includes applications in new‑feature testing, generic result verification, and scenario analysis, positioning this infrastructure as a foundation for future reliability engineering.
Author: Rain Qing.
Ele.me Technology
Creating a better life through technology