Mock + Proxy in SDK Automated Testing: Architecture, Process, and Lessons Learned
This article details the design, evolution, and scaling of a mock‑proxy based automated testing framework for an SDK, covering initial architecture, phase‑two enhancements, distributed execution, performance results, and remaining challenges in test data generation and infrastructure.
The framework grew out of the need to replace manual ad request handling with a fully automated pipeline; what follows traces its evolution from a simple mock‑and‑proxy setup to a distributed system.
Initially, the system was simple: a button click in the app sent a request to the SDK, which forwarded it to a proxy server; the proxy then fetched response data generated by a mock server, replacing the traditional Fiddler‑based redirection approach.
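The mock‑and‑proxy flow above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the canned response data, URL layout, and `forward_to_mock` helper are all assumptions for demonstration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical canned ad responses keyed by ad type; the real framework's
# response model is not described in the article.
CANNED = {"banner": {"code": 0, "ad": {"type": "banner", "title": "demo"}}}

class MockHandler(BaseHTTPRequestHandler):
    """Mock server: returns generated ad response data instead of a real backend."""
    def do_GET(self):
        ad_type = self.path.lstrip("/") or "banner"
        body = json.dumps(CANNED.get(ad_type, {"code": 404})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def forward_to_mock(mock_port: int, ad_type: str) -> dict:
    """Proxy side: forward the SDK's ad request to the mock server."""
    url = f"http://127.0.0.1:{mock_port}/{ad_type}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), MockHandler)  # port 0 = auto-assign
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(forward_to_mock(server.server_address[1], "banner"))
    server.shutdown()
```

The key property is that the app and SDK are untouched: only the proxy's upstream changes, so no per-machine Fiddler configuration is needed.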
In the second phase, several problems were solved: requests were looped internally so manual triggers were no longer needed; crash and ANR monitoring was added; failed cases could be rerun automatically; case IDs were embedded in requests so results could be matched against reporting data; and support for multiple ad types, with automatic retry of mismatched results, was introduced.
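Two of these mechanisms, case‑ID tagging and automatic retry, can be sketched as follows. The header name, case shape, and retry count are assumptions; the article only states that case IDs are embedded in requests and mismatched results are retried.

```python
# Hypothetical header used to join a request with its reporting data.
CASE_ID_HEADER = "X-Test-Case-Id"

def tag_request(headers: dict, case_id: str) -> dict:
    """Embed the case ID so results can be matched against reporting data."""
    tagged = dict(headers)
    tagged[CASE_ID_HEADER] = case_id
    return tagged

def run_with_retry(case: dict, execute, max_retries: int = 2) -> dict:
    """Run a case and automatically retry when actual != expected."""
    attempts = 0
    for attempt in range(max_retries + 1):
        attempts = attempt + 1
        actual = execute(case)
        if actual == case["expected"]:
            return {"case_id": case["id"], "passed": True, "attempts": attempts}
    return {"case_id": case["id"], "passed": False, "attempts": attempts}
```

Tagging every request with its case ID is what makes the pipeline joinable end to end: the mock server, the report backend, and the test runner can all correlate on the same key.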
As the number of test cases grew past 3,400 and execution time exceeded three hours, the team shifted to a distributed architecture. The mock server now schedules tasks to specific mobile devices based on fingerprint information, binding cases to devices and reducing duplicate runs; configuration, cases, expected results, and execution outcomes were migrated to a database to avoid frequent file I/O and enable cross‑server coordination.
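One plausible way to pin a case to a device, consistent with the fingerprint‑based scheduling described above, is to hash the case ID against the registered device list. This is a sketch under assumptions: the article does not specify the hashing scheme or what the device fingerprint contains.

```python
import hashlib

def assign_device(case_id: str, device_fingerprints: list[str]) -> str:
    """Deterministically bind a case to one device so repeated runs of the
    same case always land on the same hardware, avoiding duplicate runs."""
    if not device_fingerprints:
        raise ValueError("no devices registered")
    digest = hashlib.sha256(case_id.encode("utf-8")).hexdigest()
    return device_fingerprints[int(digest, 16) % len(device_fingerprints)]
```

Determinism is the point: because the mapping depends only on the case ID and the device list, any scheduler instance (or a rerun after a crash) computes the same assignment without coordination, which pairs naturally with moving shared state into a database.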
The current setup regenerates cases rapidly when error codes change and runs fully automatically. Challenges remain, however: generating cases for various data types (int, string, date, special characters); handling complex logic such as field dependencies, encryption, and tokens; supporting custom non‑HTTP protocols; scaling distributed scheduling further; reducing SDK‑app coupling; conducting functional verification; and measuring missed coverage rates.
360 Quality & Efficiency
360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.