
Intelligent Test Execution Practices: Risk‑Based Manual Test Recommendation, Parallel‑Coverage Traffic Filtering, Smart Build, and Priority‑Based Task Scheduling

The article covers intelligent test execution practices: risk‑based manual test recommendation, parallel‑coverage traffic filtering, smart build, priority‑based task scheduling, and UI automation self‑healing, describing methods, algorithms, and results that reduce test volume, speed up regression, and improve stability.

Baidu Geek Talk

The previous article introduced the five steps of testing—test input, test execution, test analysis, test localization, and test evaluation—focusing on intelligent test input generation such as abnormal case creation, interface case generation, and action‑set generation. This article concentrates on intelligent practices for the test execution phase.

Intelligent test execution combines data, algorithms, and engineering techniques to improve efficiency and stability. It typically includes test case recommendation, traffic filtering, task scheduling, smart build, and self‑healing execution. Research in academia and industry often uses coverage‑based relevance selection algorithms or data‑driven models.

01. Risk‑Based Manual Test Recommendation

Because code changes and environment factors make it impractical to run all test cases, the goal is to select the most fault‑revealing cases. Simple coverage‑based recommendation can be redundant, so a risk‑based approach is adopted. Code is abstracted into an abstract syntax tree and 21 metrics (e.g., loops, branches) are extracted to quantify design complexity. Machine‑learning models (Bayesian, SVM, KNN, logistic regression, or deep models such as LSTM/DNN) predict defect‑prone code, then map predictions to associated manual test cases. After deduplication and ranking, the recommended cases reduce the recommendation ratio from 50% to 20% and cut regression turnaround from 3 days to 1 day, while increasing the proportion of bug‑finding cases.

02. Parallel‑Coverage Traffic Filtering Scheme

During testing, large volumes of online traffic are used for diff, performance, and stress testing. The challenge is to select a minimal traffic subset that maximally covers test scenarios. Traditional random sampling (by data center, time, etc.) offers no coverage guarantees. The proposed scheme first performs a coarse log‑based filter (by device, region, and user attributes), then applies a greedy algorithm over parallel‑coverage metrics to select the smallest traffic set covering the most scenarios. Across multiple product lines, this approach has halved traffic volume while maintaining coverage, in some cases improving it by up to 60%.
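The greedy selection step is essentially the classic greedy set-cover heuristic: repeatedly pick the traffic sample that covers the most still-uncovered scenarios. A minimal sketch, with invented request IDs and scenario labels standing in for real coverage metrics:

```python
def greedy_cover(traffic, universe):
    """Greedily pick a small set of samples covering the scenario universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Sample with the largest marginal gain in scenario coverage.
        best = max(traffic, key=lambda s: len(traffic[s] & uncovered))
        gain = traffic[best] & uncovered
        if not gain:
            break  # remaining scenarios unreachable with this traffic
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Hypothetical coarse-filtered traffic: request id -> scenarios it exercises.
traffic = {
    "req-1": {"search", "ads"},
    "req-2": {"search"},
    "req-3": {"ads", "video", "login"},
    "req-4": {"video"},
}
selected, missed = greedy_cover(traffic, {"search", "ads", "video", "login"})
print(selected)  # ['req-3', 'req-1'] — 2 of 4 requests cover all scenarios
```

Two requests out of four cover every scenario here, which mirrors the reported effect at scale: a much smaller replayed traffic set with no loss of scenario coverage.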

03. Smart Build

Smart build dynamically adjusts CI tasks based on change analysis, enabling task pruning, skipping, cancellation, result reuse, self‑healing, and auto‑annotation. Typical scenarios include trivial changes (e.g., log updates) that do not require full regression, repeated execution of the same task due to iterative development, and redundant runs on branch versus trunk. The system extracts change features (changed lines, functions, call‑graph information) and applies policy‑driven decisions: if all features match a whitelist, the task is skipped. Baidu’s implementation has integrated over 3,000 modules into this framework.
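The whitelist decision for trivial changes can be sketched as a simple policy check: a task is skipped only if every extracted change feature matches the whitelist. The file patterns below are illustrative assumptions; a production system would match on richer features such as changed functions and call-graph reachability.

```python
import re

# Illustrative whitelist of "trivial" change patterns that never
# require full regression (documentation and logging-only edits).
TRIVIAL_PATTERNS = [
    re.compile(r"^docs/"),    # documentation directory
    re.compile(r"\.md$"),     # markdown files
    re.compile(r"_log\.py$"), # logging helpers
]

def should_skip(changed_files):
    """Skip the CI task only if ALL changed files match the whitelist."""
    return bool(changed_files) and all(
        any(p.search(f) for p in TRIVIAL_PATTERNS) for f in changed_files
    )

print(should_skip(["docs/guide.md"]))             # True  -> prune the task
print(should_skip(["docs/guide.md", "core.py"]))  # False -> run as usual
```

The all-features-must-match rule keeps the policy conservative: one non-trivial file in the diff is enough to force a normal build.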

04. Priority‑Based Task Scheduling Algorithm

When resources are limited, test tasks must be scheduled to balance stability and fault‑detection capability. A priority queue is built for mobile testing, considering task importance, waiting time, and resource demand. An offline analysis of historical task durations and coverage rates determines optimal convergence points for early stopping. A real‑time decision model monitors execution metrics (duration, screenshot count, UI‑coverage change) to decide whether to stop or continue a task. Experiments show a 10% reduction in execution time without degrading coverage, and a 12% reduction in execution time while increasing coverage by 10%.
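The queueing side of this can be sketched with a standard min-heap. The scoring weights below are illustrative, not the article's actual values; the idea is only that importance and waiting time raise a task's priority while resource demand lowers it.

```python
import heapq
import itertools

counter = itertools.count()  # tie-breaker so heapq never compares dicts

def priority(task, now):
    # Illustrative weights: importance and waiting time raise priority,
    # heavy device demand lowers it.
    wait = now - task["submitted"]
    score = 2.0 * task["importance"] + 0.1 * wait - 0.5 * task["devices"]
    return -score  # heapq is a min-heap, so negate for highest-first

now = 100.0
tasks = [
    {"name": "smoke",     "importance": 5, "devices": 1, "submitted": 90.0},
    {"name": "full-reg",  "importance": 3, "devices": 8, "submitted": 40.0},
    {"name": "ui-monkey", "importance": 2, "devices": 2, "submitted": 95.0},
]
heap = [(priority(t, now), next(counter), t) for t in tasks]
heapq.heapify(heap)

order = [heapq.heappop(heap)[2]["name"] for _ in range(len(heap))]
print(order)  # ['smoke', 'full-reg', 'ui-monkey']
```

The real-time early-stopping model described above would then run alongside this queue, terminating a dequeued task once its coverage metrics pass the offline-determined convergence point.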

05. UI Automation Self‑Healing

Automated app test cases encounter unexpected situations such as upgrade pop‑ups, slow page loads, or changed XPaths, leading to failures and high maintenance cost. Three self‑healing techniques are applied:

Abnormal pop‑up handling using object detection and text‑based classification to identify and dismiss pop‑ups.

Atomic wait technology that leverages visual UI understanding and video frame analysis to distinguish stable from unstable UI states, enabling adaptive waiting.

General case self‑healing that records successful locators (XPath, icons) and retries with historical elements upon failure.

These techniques achieve a 51% self‑healing rate, significantly improving automation stability.

Recommended reading from the "Technical Fuel Station" series:

Behind the Scenes of Baidu Intelligent Testing in Test Generation

Three Stages of Baidu Intelligent Testing

Scaling Baidu Intelligent Testing
