Quality Metric Model for Automated Testing and Risk Assessment in Commercial Platforms

This article describes Baidu's quality metric model, which integrates development-process, self-test, and automation data to prioritize tests automatically, estimate project risk, and improve testing efficiency through a six-component platform covering process control, feature mining, data collection, storage, strategy management, and annotation.

Baidu Intelligent Testing

In daily testing activities, teams often face questions such as whether simple code changes need testing, how to control risk for large releases, and how to ensure the effectiveness of automated tests. To address these challenges, Baidu Intelligent Testing began researching a quality metric model at the end of 2019.

The model aims to (1) use development process data, self‑test results, and automation data to decide if further testing is needed, reducing unnecessary effort and shortening test cycles; (2) combine test and development data to estimate risk, boosting confidence in delivery; and (3) intelligently schedule automation tasks for high cost‑performance testing.

After more than a year of research, development, and experimentation, Baidu released a series of articles detailing the model, its application in commercial platforms, and related risk‑assessment algorithms.

Implementation relies on six core capabilities: process control for end‑to‑end workflow and visualisation; feature mining to extract both generic and business‑specific attributes; feature data collection via APIs, agents, configuration and remote hooks; feature data storage and processing with unified data services, lineage and tagging; strategy management for model registration, training, debugging and scheduling; and an annotation platform to provide feedback samples for model retraining.
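The six capabilities above can be summarized in a small catalogue. This is an illustrative sketch only; the names and descriptions are assumptions for readability, not Baidu's actual identifiers.

```python
# Hypothetical catalogue of the six core platform capabilities; the
# enum member names and description strings are illustrative assumptions.
from enum import Enum

class Capability(Enum):
    PROCESS_CONTROL = "end-to-end workflow orchestration and visualisation"
    FEATURE_MINING = "generic and business-specific attribute extraction"
    DATA_COLLECTION = "feature data via APIs, agents, configuration, remote hooks"
    DATA_STORAGE = "unified data services with lineage and tagging"
    STRATEGY_MANAGEMENT = "model registration, training, debugging, scheduling"
    ANNOTATION = "feedback samples for model retraining"
```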

The technical solution follows an interaction flow where CI pipelines feed feature data into a data platform, the process‑control middle‑platform triggers the quality model, the model evaluates risk and returns a score, and the result is fed back for annotation and continuous model improvement.
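The interaction flow described above can be sketched as a minimal pipeline. All class, method, and field names here are hypothetical; the model is a stub that returns a fixed score, standing in for the real quality model.

```python
# Minimal sketch of the interaction flow: CI feeds features into a data
# platform, the process-control middle platform triggers the model, and
# the verdict is recorded for annotation. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ChangeFeatures:
    change_id: str
    metrics: dict = field(default_factory=dict)  # e.g. diff size, coverage

class DataPlatform:
    """Collects feature data emitted by CI pipelines."""
    def __init__(self):
        self._store = {}

    def ingest(self, features: ChangeFeatures) -> None:
        self._store[features.change_id] = features

    def fetch(self, change_id: str) -> ChangeFeatures:
        return self._store[change_id]

class QualityModel:
    """Stub model: returns a fixed low-risk score in [0, 1]."""
    def evaluate(self, features: ChangeFeatures) -> float:
        return 0.1

class ProcessControl:
    """Middle platform: triggers the model and keeps feedback samples."""
    def __init__(self, platform: DataPlatform, model: QualityModel):
        self.platform = platform
        self.model = model
        self.annotations = []  # (change_id, score) pairs fed back for retraining

    def on_pipeline_finished(self, change_id: str) -> float:
        score = self.model.evaluate(self.platform.fetch(change_id))
        self.annotations.append((change_id, score))
        return score

platform = DataPlatform()
control = ProcessControl(platform, QualityModel())
platform.ingest(ChangeFeatures("CL-123", {"diff_lines": 40}))
score = control.on_pipeline_finished("CL-123")
```

In a real deployment the annotation records would flow into the annotation platform mentioned earlier, closing the loop for continuous model improvement.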

Model training treats risk estimation as a binary classification problem, using logistic regression, decision trees, and rule‑based methods on aggregated historical features; inference runs online with real‑time data, producing risk scores and visual reports.
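As a concrete illustration of risk estimation as binary classification, the sketch below trains a logistic regression on a handful of made-up historical changes. The feature set (lines changed, files touched, historical bug rate, self-test pass rate) and the toy data are assumptions for the example, not Baidu's actual features.

```python
# Hypothetical sketch: treat change risk as binary classification.
# Features and training data are illustrative, not from the source.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [lines_changed, files_touched, historical_bug_rate, self_test_pass_rate]
X_train = np.array([
    [500, 12, 0.30, 0.80],
    [20,   1, 0.02, 1.00],
    [300,  8, 0.25, 0.60],
    [10,   2, 0.01, 0.95],
])
y_train = np.array([1, 0, 1, 0])  # 1 = change later exposed a defect

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Online inference: score an incoming change and map the probability
# to a decision about whether further testing is needed.
incoming_change = np.array([[250, 6, 0.20, 0.70]])
risk_score = model.predict_proba(incoming_change)[0, 1]
needs_testing = bool(risk_score >= 0.5)
```

Decision trees or hand-written rules could be swapped in the same way; the output in each case is a risk score that drives the test/skip decision and the visual reports.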

Current results show deployment across more than 20 business lines and 1,000+ services, achieving 94% accuracy, 90% recall, converting about 8% of test cases to autonomous testing, and recalling over 1% of test submissions to catch 30+ bugs.

Future work includes exploring new risk‑assessment algorithms, expanding white‑box analysis for richer features, and advancing toward fully unmanned testing supported by the quality metric model.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI, software engineering, automated testing, data platform, risk assessment, quality metric
Written by Baidu Intelligent Testing