Design and Implementation of an Internal Mock Platform for Efficient Development and Testing
The article analyzes common pain points in daily development and testing, such as manual data preparation, dependence on backend readiness, and unstable UI automation. It then evaluates existing API‑mock tools such as Apifox before presenting a custom mock platform that decouples the frontend from the backend, supports encrypted data, selective mocking, and fault simulation, and provides a visual interface for managing mock rules and recordings.
In routine feature development, developers often lack dedicated test interfaces, leading to manual database edits, delayed frontend testing when backend APIs are unavailable, and unstable UI automation due to data volatility.
The author identifies four recurring problems: cumbersome data preparation, dependence on backend‑API readiness, difficulty reproducing special‑case data during fault drills, and the constant maintenance burden of automation scenarios.
After surveying ready‑made services, the team notes that tools like Apifox offer Swagger‑based API management and mock capabilities, but they also have drawbacks, such as difficulty handling encrypted data and limited support for selectively mocking individual endpoints.
To address these gaps, a proprietary mock platform was built with the following core capabilities:
Decouples frontend development from backend services, allowing parallel development using mock data.
Provides stable mock data for UI automation, improving script reliability.
Enables recording of backend requests to trace multi‑service parameter flows.
Supports simulation of abnormal scenarios (e.g., network failures, error codes, latency) for fault‑driven testing.
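The abnormal‑scenario simulation above can be sketched as a small rule that injects latency and a forced error code before a response is returned. The rule fields, names, and values here are hypothetical, since the article does not describe the platform's data model:

```python
import time
from dataclasses import dataclass


@dataclass
class FaultRule:
    """Hypothetical fault-injection rule; all field names are assumptions."""
    delay_ms: int = 0       # extra latency to simulate a slow network
    status_code: int = 200  # forced HTTP status, e.g. 500 for a server error
    body: str = "{}"        # response payload returned to the client


def apply_fault(rule: FaultRule) -> tuple[int, str]:
    """Sleep for the configured delay, then return the forced status and body."""
    time.sleep(rule.delay_ms / 1000)
    return rule.status_code, rule.body


# Simulate a backend error with 200 ms of added latency:
status, body = apply_fault(
    FaultRule(delay_ms=200, status_code=500, body='{"error": "internal"}')
)
```

In a fault drill, a tester would attach such a rule to an endpoint and verify that the frontend degrades gracefully instead of breaking.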
The platform’s implementation follows three steps: (1) domain name redirection forwards requests to the mock server; (2) the mock server resolves the address, forwards the request to the real server, and captures the response; (3) based on a type flag, the response is either stored and returned or replaced with pre‑configured mock data.
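The record‑or‑mock decision in step 3 can be sketched as a pure function, with the type‑flag values and in‑memory store invented for illustration (the article does not name them):

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

RECORD = 0  # hypothetical flag: forward to the real server and capture the response
MOCK = 1    # hypothetical flag: return pre-configured mock data instead


@dataclass
class InterfaceRule:
    path: str
    type_flag: int = RECORD
    mock_status: int = 200
    mock_body: str = "{}"


# Captured real responses, keyed by path, available for later inspection/editing
recordings: Dict[str, Tuple[int, str]] = {}


def handle(rule: InterfaceRule,
           fetch_real: Callable[[str], Tuple[int, str]]) -> Tuple[int, str]:
    """Step 3 of the flow: record-and-return, or substitute mock data."""
    if rule.type_flag == RECORD:
        status, body = fetch_real(rule.path)    # step 2: forward to the real server
        recordings[rule.path] = (status, body)  # store the captured response
        return status, body
    return rule.mock_status, rule.mock_body


fetch = lambda path: (200, '{"ok": true}')  # stand-in for a real upstream call
handle(InterfaceRule("/api/user"), fetch)   # records and passes through
handle(InterfaceRule("/api/user", type_flag=MOCK,
                     mock_status=500, mock_body='{"err": 1}'), fetch)
```

Keeping this decision as a single function per request means the same proxy path serves both recording and replay, which is what lets the platform toggle mocking per endpoint.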
Visual interfaces allow users to view recorded traffic, edit response payloads, status codes, and delays, and toggle mocking per endpoint. The underlying database schema stores interface metadata, including the type field that indicates whether mocking is required.
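One way to model that interface‑metadata table is shown below using an in‑memory sqlite3 database; apart from the type field mentioned in the article, the column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE interface (
    id            INTEGER PRIMARY KEY,
    domain        TEXT NOT NULL,       -- feature domain the rule belongs to
    path          TEXT NOT NULL,       -- API endpoint being recorded/mocked
    type          INTEGER NOT NULL DEFAULT 0,  -- 0 = record/pass-through, 1 = mock
    status_code   INTEGER DEFAULT 200, -- editable response status
    delay_ms      INTEGER DEFAULT 0,   -- editable injected latency
    response_body TEXT                 -- editable response payload
)
""")

# Toggle mocking on for one endpoint:
conn.execute(
    "INSERT INTO interface (domain, path, type) VALUES (?, ?, ?)",
    ("demo.mock", "/api/user", 1),
)
row = conn.execute(
    "SELECT type FROM interface WHERE path = ?", ("/api/user",)
).fetchone()
print(row[0])  # → 1
```

The visual console described above would read and write rows like these when a user edits payloads, status codes, or delays.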
Typical usage workflow for developers includes creating a feature domain, accessing the .mock sub‑domain, performing business operations, then inspecting and editing captured API data in the mock console before re‑accessing the mocked domain to receive the customized responses.
Real‑world case studies demonstrate the platform’s value: (1) enforcing weekly usage limits for personal accounts in UI tests; (2) validating that new experimental APIs do not affect core rendering paths by forcing error responses; (3) simulating missing backend data during fault‑drill scenarios to verify graceful degradation.
Future improvements aim to support HTTPS mocking for pre‑release environments and to fully automate decryption of encrypted payloads.
Continuous Delivery 2.0
Tech and case studies on organizational management, team management, and engineering efficiency