
Improving Test Efficiency with Interface Testing and Custom Test Tools

This article explains how QA engineers can boost testing efficiency by leveraging interface testing—designing comprehensive API test cases, organizing them into reusable collections, and applying them for requirement verification, data construction, and monitoring—along with developing custom test tools to automate repetitive tasks and improve overall workflow.

转转QA

Preface

During project testing, how can we make good use of existing platforms and testing tools to improve efficiency, and can we go further? These are common questions for QA. This article introduces two aspects of how I improve testing efficiency.

1. Interface Testing

Last year our team started using an interface platform for API testing. Initially we were unsure how to perform interface testing and where to apply it, but after more than a year of practice the doubts were resolved and many benefits emerged. Using the spu product detail page project as an example, the following sections explain how we conduct interface testing.

1.1 What kinds of interface tests are needed

When designing the test plan early on, the project's characteristics determine which interface tests are required. If the project adds many new features, core-process interfaces can be identified during the requirement and technical reviews and covered when designing cases; if many regression scenarios exist, previous interface cases can be reused for regression testing.

1.2 How to design interface test cases

After clarifying the needed interface tests, the details of the cases must be designed, including which tests to perform on a given interface, how to assert responses, and how to organize case collections. For the spu product detail page project, the new interfaces in the spu publishing flow are suitable for requirement testing and regression testing.

1.2.1 Types of tests a single interface can perform

When a UI test requires repetitive operations, interface testing can save time. For exception scenarios that cannot be reproduced on the page, modifying interface parameters enables abnormal testing.
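To illustrate the abnormal-testing idea, here is a minimal sketch of tampering with a request parameter that the page itself would never send. The `publish_spu` function and its parameter check are hypothetical stand-ins for a real server-side interface:

```python
# Hypothetical publish interface: the server-side validation below stands
# in for real backend logic; field names are illustrative.
def publish_spu(params):
    """Reject non-positive prices, which the UI form would never submit."""
    if params.get("price", 0) <= 0:
        return {"code": -1, "msg": "invalid price"}
    return {"code": 0, "msg": "ok"}

# The page cannot produce price=-1, but an interface case can send it
# directly and assert the backend rejects it:
resp = publish_spu({"title": "demo", "price": -1})
assert resp["code"] == -1
```

Driving the interface directly like this exercises validation branches that are unreachable through the UI.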

1.2.2 How to assert interface case results

Assertions should go beyond checking that the success code is 0 or that the content is non-empty; they should verify specific response fields. For example, the “off‑shelf list” API should assert that the list contains a particular spuId, and the product detail API should assert that the sku’s uid matches the uid that published the spu.
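The two assertions above can be sketched as follows. The response dicts stand in for the JSON the off‑shelf list and product detail interfaces would return; all field names here are illustrative, not the platform's actual schema:

```python
def assert_off_shelf_contains(response, spu_id):
    """Assert the off-shelf list actually contains the expected spuId."""
    assert response["code"] == 0, f"unexpected code: {response['code']}"
    ids = [item["spuId"] for item in response["data"]["list"]]
    assert spu_id in ids, f"spuId {spu_id} missing from off-shelf list"

def assert_detail_uid_matches(response, publisher_uid):
    """Assert the sku's uid matches the uid that published the spu."""
    assert response["code"] == 0
    assert response["data"]["sku"]["uid"] == publisher_uid

# Usage with stubbed responses in place of real API calls:
off_shelf = {"code": 0, "data": {"list": [{"spuId": 1001}, {"spuId": 1002}]}}
detail = {"code": 0, "data": {"sku": {"uid": "u-42"}}}
assert_off_shelf_contains(off_shelf, 1001)
assert_detail_uid_matches(detail, "u-42")
```

Field-level assertions like these catch regressions that a bare `code == 0` check would silently pass.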

1.2.3 How to organize interface case collections

For the spu publish, update, off‑shelf, re‑publish/delete flow, placing all related cases in a single collection brings benefits:

Adjusting input parameters allows the collection to cover the entire lifecycle from publishing to deletion.

Interfaces that require an spuId no longer need a separate publish step.

Data‑generating interfaces can be cleaned up by the delete interface, avoiding accumulation of test data.

Based on this idea, the case design flow is shown in the diagram below:

Then wire up the cases that consume a previous case’s output as input; for example, the edit interface uses the skuId returned by the publish interface.
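A minimal sketch of this chaining, with an in-memory stub standing in for the spu backend (the class, its methods, and the id scheme are all illustrative): each case consumes the previous case's output, and the delete case cleans up the data the publish case created.

```python
class FakeSpuService:
    """In-memory stand-in for the spu publishing backend."""
    def __init__(self):
        self._store = {}
        self._next_id = 1000

    def publish(self, title):
        spu_id, sku_id = self._next_id, self._next_id + 1
        self._next_id += 2
        self._store[spu_id] = {"skuId": sku_id, "title": title, "on_shelf": True}
        return {"spuId": spu_id, "skuId": sku_id}

    def edit(self, sku_id, title):
        for spu in self._store.values():
            if spu["skuId"] == sku_id:
                spu["title"] = title
                return True
        return False

    def off_shelf(self, spu_id):
        self._store[spu_id]["on_shelf"] = False

    def delete(self, spu_id):
        del self._store[spu_id]

def run_collection(svc):
    # Case 1: publish; its output feeds every later case.
    out = svc.publish("demo spu")
    # Case 2: edit reuses the skuId returned by publish.
    assert svc.edit(out["skuId"], "edited title")
    # Case 3: off-shelf the spu published in case 1.
    svc.off_shelf(out["spuId"])
    # Case 4: delete cleans up, leaving no test data behind.
    svc.delete(out["spuId"])
    return out
```

Because the collection both creates and deletes its own data, it can run repeatedly without polluting the test environment.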

Finally we obtain the following interface test collection:

1.3 How interface cases are applied

Interface cases are not only used by QA for a specific requirement; they also serve requirement testing, data construction, and daily monitoring.

Requirement testing: Interface cases can cover abnormal scenarios hard to reproduce on the UI, enable faster regression detection, and allow developers to run core‑process cases for self‑testing.

Data construction: Cases can act as small data‑construction tools, e.g., adding user tags by calling a tag‑adding API with zzuid and tagId.

Daily monitoring: Core‑process APIs accumulated on the apitest and zapi platforms can be executed by scheduled jobs for health checks.

2. Test Tools

When existing methods do not address pain points, QA can develop custom test tools to assist testing. Developing tools brings two benefits:

Developing tools improves personal business and coding skills.

Using tools saves time during testing.

For example, constructing test scenarios often requires modifying multiple database tables. A custom API can change several tables with a single call, simplifying the process for both experienced and new QA or developers.
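A minimal sketch of such a tool, using an in-memory SQLite database as a stand-in for the real test-environment tables (table and column names are illustrative); the point is that one call changes every table a scenario touches, inside a single transaction:

```python
import sqlite3

def make_db():
    """Build a throwaway database with two related tables."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.execute("CREATE TABLE payments (order_id INTEGER, state TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, 'created')")
    conn.execute("INSERT INTO payments VALUES (1, 'pending')")
    return conn

def set_order_paid(conn, order_id):
    """One call flips every table the 'paid' scenario touches."""
    with conn:  # single transaction: all tables change together or not at all
        conn.execute("UPDATE orders SET status='paid' WHERE id=?", (order_id,))
        conn.execute("UPDATE payments SET state='success' WHERE order_id=?",
                     (order_id,))

conn = make_db()
set_order_paid(conn, 1)
```

Exposed as an API, the same helper spares both new and experienced testers from editing several tables by hand for every scenario.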

Another example addresses the lack of real‑time merchant data in the test environment for the “store front” and data list pages. A data‑construction tool that builds cache keys and values from a uid lets testers see realistic same‑day data in the test environment, enabling early verification of related requirements. The tool can also be used for regression verification when backend statistics fields change, or for front‑end page validation after UI changes.
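A minimal sketch of the cache-construction idea: given a uid, write the cache entries the pages read so same-day data appears in the test environment. The dict below stands in for the real cache (e.g. Redis), and the key layout and field names are assumptions, not the actual schema:

```python
import datetime

# Stand-in for the real cache backend.
fake_cache = {}

def build_today_data(uid, orders=12, gmv=3456.78):
    """Write today's merchant stats for uid under a hypothetical key layout."""
    today = datetime.date.today().isoformat()
    key = f"merchant:stats:{uid}:{today}"
    fake_cache[key] = {"orderCount": orders, "gmv": gmv}
    return key

key = build_today_data("uid-1001")
assert fake_cache[key]["orderCount"] == 12
```

With the cache pre-populated this way, the pages render realistic data long before real merchant traffic exists in the test environment.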

Conclusion

There are many ways to improve testing efficiency; the choice depends on the specific project. Continuously exploring how to apply these methods and solve new problems is essential for future testing work.
