
Cut Regression Testing to Minutes with Lightweight Interception and Spock Tests

This article describes how a team closed long-standing automation gaps in its custom output modules and second-party packages. By introducing a lightweight interception verification system and a Groovy Spock data-driven testing framework, the team achieved full-coverage regression in minutes, shortened bug detection time, and improved overall testing efficiency.

Qunhe Technology Quality Tech

1. Problem Background and Challenges

The team's custom design tools contain several modules that could not be covered by automation, leaving gaps in quality assurance. The main issues were:

1. Custom output module testing difficulty

The custom business output module generates large JSON payloads (over 5 MB) for downstream consumption. The size makes test data preparation hard, automation scripts slow and labor‑intensive, and the interface unsuitable for direct testing. Teams have relied on downstream interfaces for indirect testing, causing delays and long debugging chains.

2. Second‑party package quality assurance difficulty

The team maintains some second‑party packages (e.g., lod packages) that require sharding logic before returning data to validation stages. Although the service does not call these packages directly, it must ensure their quality. Version releases are constrained by downstream testing schedules, and manual verification cannot cover complex sharding logic. The team therefore explored automated testing solutions.

Both problems share a common trait: business capabilities are exposed to other teams while existing automation coverage is insufficient, so a self-contained automation testing framework was needed.

2. Solution

The team designed two targeted solutions to break the automation bottlenecks.

Solution 1: Lightweight Interception Verification for Custom Output Module

The custom output module’s parameters field holds the main data. Instead of using the heavyweight SOA interface, the team leveraged a downstream interface that sends a minimal payload (only the plan ID). By intercepting the downstream request, processing it, and returning the data, the team could test the custom output logic efficiently. Test cases add a special request header with a hunterid (trace ID) and a flag indicating a custom‑output test. The intercepted data is packaged into JSON and uploaded to COS (cloud storage). The test platform then retrieves the JSON from COS, compares it with the expected result, and logs any discrepancies.
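The interception decision described above can be sketched in plain Java. The header names (`hunterid` for the trace ID, `x-custom-output-test` as the flag) and the COS key layout are hypothetical stand-ins, since the article does not give the exact names:

```java
import java.util.Map;

// Minimal sketch of the interception rule, assuming hypothetical header names.
public class InterceptRule {
    static final String TRACE_HEADER = "hunterid";            // trace ID set by the test platform
    static final String FLAG_HEADER = "x-custom-output-test"; // marks a custom-output test request

    // A request is intercepted only when both the trace ID and the flag are present.
    public static boolean shouldIntercept(Map<String, String> headers) {
        String traceId = headers.get(TRACE_HEADER);
        return traceId != null && !traceId.isEmpty()
                && "true".equals(headers.get(FLAG_HEADER));
    }

    // Storage key under which the intercepted JSON would be uploaded to COS,
    // so the test platform can fetch it back by trace ID.
    public static String storageKey(String traceId) {
        return "custom-output/" + traceId + ".json";
    }
}
```

In a real service this check would live in a request filter or interceptor; the point is that ordinary traffic passes through untouched, and only flagged test requests trigger the upload.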

All test cases are defined in the testing platform, which adds the hunterid header and triggers the interception flow, enabling full‑scene coverage without heavy payloads.
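The comparison step on the platform side can be illustrated with a recursive walk over the parsed JSON (modeled here as nested `Map`s), collecting the path of each discrepancy for the log. This is a simplified sketch, not the platform's actual diff implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Sketch of the comparison step: walk expected vs. actual structures and
// collect the paths of any discrepancies.
public class JsonDiff {
    public static List<String> diff(Object expected, Object actual, String path) {
        List<String> mismatches = new ArrayList<>();
        if (expected instanceof Map<?, ?> e && actual instanceof Map<?, ?> a) {
            // Recurse into every expected field.
            for (Object key : e.keySet()) {
                mismatches.addAll(diff(e.get(key), a.get(key), path + "/" + key));
            }
            // Flag fields present in the actual output but not expected.
            for (Object key : a.keySet()) {
                if (!e.containsKey(key)) mismatches.add(path + "/" + key + ": unexpected field");
            }
        } else if (!Objects.equals(expected, actual)) {
            mismatches.add(path + ": expected " + expected + " but was " + actual);
        }
        return mismatches;
    }
}
```

Reporting paths rather than a bare pass/fail matters for 5 MB payloads: a single line like `/b/c: expected 2 but was 3` is far easier to debug than two full JSON dumps.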

Solution 2: Spock Data‑Driven Testing for Second‑Party Packages

To automate second‑party package validation, the team adopted a Groovy + Spock + RestAssured stack. Test data are read from external files, transformed into the required input structures, and fed to the package’s client methods. The returned data are converted to JSON and compared against expected JSON files. This data‑driven approach overcomes limitations of the existing Apollo platform, which requires manual JSON expectation updates. The framework also supports one‑click switching of expected JSON via configuration, simplifying test case maintenance.
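The data-driven pattern can be sketched in plain Java (the team's actual tests are Groovy Spock specs, where each row below would be one line of a `where:` table). The `shard` method and the case table are hypothetical stand-ins for the second-party package's sharding logic and its external data files:

```java
import java.util.ArrayList;
import java.util.List;

// Analogous plain-Java sketch of a Spock-style data-driven test.
public class ShardingDataDriven {
    // Stand-in for the package's sharding logic: split items into fixed-size shards.
    static List<List<Integer>> shard(List<Integer> items, int shardSize) {
        List<List<Integer>> shards = new ArrayList<>();
        for (int i = 0; i < items.size(); i += shardSize) {
            shards.add(items.subList(i, Math.min(i + shardSize, items.size())));
        }
        return shards;
    }

    record Case(List<Integer> input, int shardSize, int expectedShards) {}

    public static void main(String[] args) {
        // Each row plays the role of one line in a where: table; real cases would be
        // read from external data files and compared against expected JSON files.
        List<Case> cases = List.of(
                new Case(List.of(1, 2, 3, 4, 5), 2, 3),
                new Case(List.of(1, 2, 3), 3, 1),
                new Case(List.of(), 4, 0));
        for (Case c : cases) {
            int actual = shard(c.input(), c.shardSize()).size();
            if (actual != c.expectedShards())
                throw new AssertionError("shardSize=" + c.shardSize() + ": expected "
                        + c.expectedShards() + " shards but got " + actual);
        }
    }
}
```

Keeping inputs and expectations in external files is what makes "one-click switching" of expected JSON possible: a configuration change points the runner at a different expectation directory without touching test code.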

Full coverage of design post‑processing and custom output test cases is maintained, as shown in the diagrams.

3. Implementation Effects and Business Value

The team maintains over 20 automated regression cases for custom output and design post-processing, plus 58 SOA automation cases; each full run completes in roughly 3 minutes.

Automation now covers all related business within each iteration, allowing rapid updates of expected values.

Since automation rollout, regression has been executed 20 times over six months, with a bug detection rate stabilizing around 20%.

4. Conclusion

Testing autonomy: eliminated reliance on external teams and gained end‑to‑end verification capability.

Efficiency multiplied: regression time reduced from hours to minutes.

Quality improved: automated bug discovery rate increased to 20%.

For modules that are hard to automate, actively seek best practices from other teams and involve developers to assist testing; many testing challenges can then be resolved.

Tags: software testing, test automation, backend testing, continuous integration, Groovy, Spock