
Design and Implementation of an Automated Backend Interface Testing System

The article presents a language‑agnostic, low‑maintenance automated backend interface testing system that unifies HTTP and RPC calls, uses a parameter pool, JSON Schema and JSONPath for assertions, generates test cases from live traffic, measures coverage, and continuously updates suites to achieve near‑full coverage.

Tencent Cloud Developer

The article introduces an automated testing system aimed at solving common pain points faced by backend developers, such as recurring bugs, version‑compatibility issues, and the high cost of maintaining test cases. The proposed solution achieves near‑100% test‑case coverage with low management overhead.

Background and Motivation

Backend services often suffer from repeated bugs, complex version‑compatibility logic, and insufficient monitoring. Existing tools (e.g., JMeter, Postman, internal WeTest) are either too heavyweight or do not meet specific business requirements, prompting the need for a self‑built solution.

System Goals

The system should be language‑agnostic, support no‑code test‑case authoring, enable scenario testing with variable sharing, provide flexible scheduling (full, module‑level, group‑level, or single‑case), handle both HTTP and RPC calls, and minimize the effort required by backend engineers.

Overall Architecture

The architecture unifies HTTP and RPC access via a common call description format (e.g., http://host:port/urlpath + reqbody and rpc://ip:port/method + reqbody). Test cases are assigned to modules; the proxy layer routes each request to the appropriate service implementation. Scheduling is driven by an asynchronous MQ that supports multiple trigger sources, granular dispatch, and environment selection.
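As a sketch of this unified access layer, a call description can be split into protocol, address, and target before the proxy dispatches it. The field names and parsing details below are illustrative, not the article's exact wire format:

```python
from urllib.parse import urlsplit

def parse_call_description(desc: str, reqbody: dict) -> dict:
    """Split a unified call description (scheme://authority/target) into
    the pieces a proxy layer would need to route the request.
    Field names here are illustrative, not the article's exact format."""
    parts = urlsplit(desc)
    return {
        "protocol": parts.scheme,          # "http" or "rpc"
        "address": parts.netloc,           # host:port or ip:port
        "target": parts.path.lstrip("/"),  # URL path or RPC method name
        "body": reqbody,
    }

call = parse_call_description("rpc://10.0.0.1:8000/GetBookInfo",
                              {"bookId": "123456"})
```

With both protocols reduced to the same four fields, the dispatcher only has to branch on `protocol`.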

Parameter Pool and Variable Sharing

To handle multi‑step interface chains, a parameter pool is introduced. Each pool entry is a key‑value pair whose value can be a literal or a JSONPath expression extracted from a previous response. This decouples variable dependencies from linear “pass‑through” flows, allowing any case to reference any pool variable.
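A minimal sketch of the two halves of this mechanism, extracting values from a response into the pool and substituting them into later requests. The `${name}` placeholder syntax and dotted paths are hypothetical simplifications of the JSONPath expressions the article describes:

```python
import re

def resolve_params(template, pool):
    # Replace ${name} placeholders (hypothetical syntax) in a request
    # template with values drawn from the shared parameter pool.
    def substitute(value):
        if isinstance(value, str):
            return re.sub(r"\$\{(\w+)\}", lambda m: str(pool[m.group(1)]), value)
        return value
    return {k: substitute(v) for k, v in template.items()}

def extract_to_pool(pool, response, bindings):
    # After a case runs, pull values out of its JSON response into the
    # pool. `bindings` maps pool keys to dotted paths -- a simplified
    # stand-in for full JSONPath expressions.
    for key, path in bindings.items():
        node = response
        for part in path.split("."):
            node = node[part]
        pool[key] = node

pool = {}
extract_to_pool(pool, {"data": {"bookId": "123456"}}, {"bookId": "data.bookId"})
req = resolve_params({"id": "${bookId}", "page": 1}, pool)
```

Because every case reads from and writes to the same pool, a case can consume a variable produced by any earlier case, not just its immediate predecessor.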

JSON Schema Component

Instead of writing field‑by‑field assertions, the system uses JSON Schema to describe the expected response structure. Example schema for a bookInfo object:

{
  "type": "object",
  "required": ["bookId", "title", "author", "cover", "format", "price"],
  "properties": {
    "bookId": {"type": "string", "const": "123456"},
    "title": {"type": "string", "minLength": 1},
    "author": {"type": "string", "minLength": 1},
    "cover": {"type": "string", "format": "uri"},
    "format": {"type": "string", "enum": ["epub", "txt", "pdf", "mobi"]},
    "price": {"type": "number", "exclusiveMinimum": 0}
  }
}

This approach offers high readability, language‑independence, and the ability to generate schemas automatically from sample JSON.
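In practice a full JSON Schema validator library would do this job, but the keywords in the example above are simple enough that a minimal, stdlib-only sketch shows how the assertion works. This covers only the keywords the example uses and ignores the "format" keyword; it is an illustration, not a spec-complete validator:

```python
def check(instance, schema):
    """Validate against the subset of JSON Schema keywords used in the
    bookInfo example (type, required, properties, const, enum,
    minLength, exclusiveMinimum). The "format" keyword is ignored."""
    errors = []
    type_map = {"object": dict, "string": str, "number": (int, float)}
    t = schema.get("type")
    if t and not isinstance(instance, type_map[t]):
        return ["expected type %s" % t]
    for field in schema.get("required", []):
        if field not in instance:
            errors.append("missing required field %r" % field)
    for field, sub in schema.get("properties", {}).items():
        if field in instance:
            errors.extend("%s: %s" % (field, e) for e in check(instance[field], sub))
    if "const" in schema and instance != schema["const"]:
        errors.append("expected const %r" % schema["const"])
    if "enum" in schema and instance not in schema["enum"]:
        errors.append("value not in enum")
    if "minLength" in schema and len(instance) < schema["minLength"]:
        errors.append("string shorter than minLength")
    if "exclusiveMinimum" in schema and instance <= schema["exclusiveMinimum"]:
        errors.append("value not above exclusiveMinimum")
    return errors

book_schema = {
    "type": "object",
    "required": ["bookId", "title", "author", "cover", "format", "price"],
    "properties": {
        "bookId": {"type": "string", "const": "123456"},
        "title": {"type": "string", "minLength": 1},
        "author": {"type": "string", "minLength": 1},
        "cover": {"type": "string", "format": "uri"},
        "format": {"type": "string", "enum": ["epub", "txt", "pdf", "mobi"]},
        "price": {"type": "number", "exclusiveMinimum": 0},
    },
}

ok = check({"bookId": "123456", "title": "Demo", "author": "A",
            "cover": "https://cdn.example.com/c.jpg", "format": "epub",
            "price": 9.9}, book_schema)
```

One schema document replaces dozens of field-by-field assertions, and the validation errors read as a structured diff against the expected response shape.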

JSONPath Component

JSONPath complements JSON Schema by enabling relational assertions that Schema cannot express, such as $.updateTime > $.createTime, or checking that an array is non‑empty ($.datas.length > 0).
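A toy evaluator for such relational assertions might look like this. It is a simplified stand-in for a real JSONPath engine, supporting only space-separated operands, dotted paths, and a length pseudo-field:

```python
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq, "!=": operator.ne}

def lookup(data, path):
    # Resolve a dotted path such as "$.datas.length"; the "length"
    # segment maps to len(). A stand-in for a full JSONPath engine.
    node = data
    for part in path.lstrip("$.").split("."):
        node = len(node) if part == "length" else node[part]
    return node

def assert_relation(data, expr):
    # Evaluate assertions like "$.updateTime > $.createTime" or
    # "$.datas.length > 0" (operands must be space-separated).
    left, op, right = expr.split()
    lval = lookup(data, left)
    rval = lookup(data, right) if right.startswith("$") else float(right)
    return OPS[op](lval, rval)

resp = {"createTime": 100, "updateTime": 200, "datas": [1, 2]}
```

The schema describes each field in isolation; assertions like these relate two fields of the same response to each other, which is exactly the gap JSONPath fills.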

Coverage Measurement

The article defines two coverage metrics: global coverage = covered interfaces / total interfaces × 100%, and per‑interface coverage = covered valid cases / total valid cases × 100%. Valid cases are derived from enumerated parameter values and their combinatorial possibilities, identified through online traffic analysis.
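The two metrics translate directly into set arithmetic; a minimal sketch:

```python
def global_coverage(covered_interfaces, all_interfaces):
    # global coverage = covered interfaces / total interfaces * 100%
    return 100.0 * len(covered_interfaces & all_interfaces) / len(all_interfaces)

def interface_coverage(covered_cases, valid_cases):
    # per-interface coverage = covered valid cases / total valid cases * 100%
    return 100.0 * len(covered_cases & valid_cases) / len(valid_cases)
```

The intersection guards against counting cases or interfaces that are no longer in the valid set, so coverage cannot exceed 100% as the denominators evolve.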

Automatic Test‑Case Generation

By analyzing live traffic, the system extracts parameter enumerations (e.g., listType=[1,2,3,4], platform=[android,ios,web]) and determines which parameters are enumerable. It then generates all feasible combinations, producing thousands of test cases (e.g., 8,000+ cases for read APIs), and continuously updates schemas as new traffic arrives.
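The combination step is essentially a cartesian product over the mined enumerations; the traffic-mining step itself is out of scope here. Parameter names mirror the article's example:

```python
from itertools import product

def generate_cases(enums):
    # Expand mined parameter enumerations into the full cartesian
    # product of request-parameter combinations.
    keys = sorted(enums)
    return [dict(zip(keys, combo)) for combo in product(*(enums[k] for k in keys))]

cases = generate_cases({"listType": [1, 2, 3, 4],
                        "platform": ["android", "ios", "web"]})
```

Two parameters with 4 and 3 values already yield 12 cases; with a handful of enumerable parameters per interface, counts in the thousands follow quickly.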

Case Optimization and Promotion

When a case fails, the system can automatically “promote” it in one of three ways: removing it, replacing it with a fresh request captured from live traffic, or adjusting the JSON Schema to reflect a legitimate change (e.g., making a field optional).
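One of those promotion actions, relaxing the schema when a field legitimately becomes optional, can be sketched as follows (the function name is hypothetical):

```python
import copy

def make_field_optional(schema, field):
    # One concrete promotion action: drop a field from "required" so a
    # response that legitimately omits it no longer fails validation.
    # Returns a new schema; the stored original is left untouched.
    relaxed = copy.deepcopy(schema)
    relaxed["required"] = [f for f in relaxed.get("required", []) if f != field]
    return relaxed

schema = {"type": "object", "required": ["bookId", "price"]}
relaxed = make_field_optional(schema, "price")
```

Returning a copy rather than mutating in place lets the system stage the relaxed schema for review before it replaces the stored one.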

Discovery and Completion

Two offline tasks detect new interfaces (new traffic patterns) and new test cases (previously uncovered parameter combinations). Detected items are fed back into the generation pipeline, ensuring the test suite stays up‑to‑date.
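Both offline tasks reduce to set differences between what live traffic exhibits and what the suite already covers. The inputs are assumed here to be normalized interface identifiers and case fingerprints:

```python
def discover_gaps(traffic_interfaces, tested_interfaces,
                  traffic_cases, tested_cases):
    # Offline discovery as set differences: interfaces seen in live
    # traffic but not under test, and parameter combinations seen in
    # traffic but not yet covered by any case.
    return traffic_interfaces - tested_interfaces, traffic_cases - tested_cases

new_ifaces, new_cases = discover_gaps(
    {"GetBookInfo", "ListBooks"}, {"GetBookInfo"},
    {("ListBooks", "listType=1")}, set())
```

Feeding both difference sets back into the generation pipeline is what keeps coverage tracking the live service rather than a snapshot of it.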

Conclusion

The solution integrates JSON Schema, JSONPath, a parameter‑pool mechanism, and a flexible MQ‑driven scheduler to build a language‑agnostic, low‑maintenance automated testing platform for backend services. It demonstrates a practical method for measuring and improving test coverage using online traffic analysis.

Tags: JSON Schema, Backend Development, Automated Testing, JSONPath, test coverage, MQ scheduling, parameter pool
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
