How a One‑Stop Performance Testing Platform Boosted Efficiency by Up to 80%
An internal one‑stop performance testing platform was built to streamline workflow control, test task management, and automated performance baselines. It cut repeat test cycles by up to 80%, supports shared load‑generator resources and real‑time monitoring, and is planned to extend toward middleware test automation and intelligent alerting.
Introduction
A one‑stop performance testing platform was developed on top of open‑source tools to meet the company’s needs, providing workflow control, test task management, and automated performance baselines for R&D engineers.
Background
Time cost: each test required ad‑hoc provisioning of load generators, increasing both time and monetary costs; multiple environments made results incomparable.
Data cost: scripts, test data, machines, and reports were not retained.
Risk from uncontrolled production testing.
Lack of a unified platform for automated performance baselines.
Due to these factors and growing testing demands, the team decided to build a platform for performance testing.
Business Architecture
Key Feature Overview
Production Testing Workflow Control Module
Enables review, notification, and transparent results for production testing.
Testing Management Module
Aggregates scripts, parameter files, and report data.
During testing, the platform can integrate with the company’s monitoring system to view metrics of the target machines.
Online Script Editing Module
Extracts frequently changing parameters during test execution and supports real‑time modification of common parameters.
Dynamic adjustments allow:
Changing concurrency and duration on the fly.
Randomizing parameter files.
Controlling load rates and ratios per sampler.
Supporting gradient load modes.
Adjusting test machines as needed.
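The gradient load mode described above can be sketched as a stepped schedule that maps elapsed test time to a target concurrency. This is a hypothetical model for illustration, not the platform's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class GradientStep:
    """One step of a gradient (stepped) load profile."""
    concurrency: int   # virtual users to run during this step
    duration_s: int    # how long to hold this step, in seconds

def concurrency_at(steps: list[GradientStep], elapsed_s: int) -> int:
    """Return the target concurrency at a given elapsed time.

    Walks the step schedule in order; once the final step has ended,
    the load drops to zero.
    """
    t = 0
    for step in steps:
        t += step.duration_s
        if elapsed_s < t:
            return step.concurrency
    return 0

# Ramp 50 -> 100 -> 200 virtual users, holding each level for 5 minutes.
profile = [GradientStep(50, 300), GradientStep(100, 300), GradientStep(200, 300)]
```

A controller polling `concurrency_at` every few seconds could adjust the running load on the fly, which is the same idea as changing concurrency and duration mid-test.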
Shared Testing Machine Module
Supports both K8s nodes and fixed virtual machines, allowing on‑demand addition and removal of load generators.
Node addition illustration (figure omitted).
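On-demand addition of K8s load generators can be sketched as sizing a generator pool from the target load, then scaling a Deployment to match. The deployment name, namespace, and per-pod capacity below are assumptions for illustration:

```python
import math

def generators_needed(target_rps: float, rps_per_generator: float) -> int:
    """Number of load-generator pods needed to drive a target request rate.

    rps_per_generator is a calibrated per-pod capacity; the figure used
    here is hypothetical, not something the platform defines.
    """
    return max(1, math.ceil(target_rps / rps_per_generator))

# With the official `kubernetes` Python client, scaling a hypothetical
# "load-generator" Deployment to that count would look like:
#
#   from kubernetes import client, config
#   config.load_kube_config()
#   apps = client.AppsV1Api()
#   apps.patch_namespaced_deployment_scale(
#       name="load-generator", namespace="perf",
#       body={"spec": {"replicas": generators_needed(20_000, 1_500)}})
```

Scaling the replica count back down after a run is what makes the shared pool cheaper than per-team fixed machines.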
Automated Performance Baseline Module
In agile development, short cycles make detailed performance testing impractical; a baseline allows comparison of performance across iterations to determine if goals are met.
Baseline results are divided into three data sets:
Application metric data.
Machine load performance data.
Queue metrics for asynchronous interfaces.
Example visualizations include a list-style threshold comparison, a machine load chart, and asynchronous queue metrics with a red overlay marking the async task name (figures omitted).
Baselines can be triggered automatically or manually, and results are pushed to agile team members; when a regression is detected, the platform automatically files a performance bug.
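The regression check that drives automatic bug filing can be sketched as a threshold comparison against the stored baseline. The metric names and the 10% tolerance here are illustrative, not the platform's actual defaults:

```python
def find_regressions(baseline: dict[str, float],
                     current: dict[str, float],
                     tolerance: float = 0.10) -> list[str]:
    """Return names of metrics that regressed past the baseline.

    Assumes higher = worse (e.g. p99 latency in ms, error rate). A metric
    regresses when it exceeds its baseline by more than `tolerance`.
    Metrics missing from `current` are treated as unchanged.
    """
    return [name for name, base in baseline.items()
            if current.get(name, base) > base * (1 + tolerance)]

baseline = {"p99_latency_ms": 120.0, "error_rate": 0.010}
current  = {"p99_latency_ms": 140.0, "error_rate": 0.009}
```

Each name returned would become one automatically filed performance bug, tagged with the iteration that introduced it.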
Challenges Encountered and Solutions
High concurrency load testing strains load generators; solution: shared machine module with dynamic K8s and VM resources.
Data generation for automated baselines is time‑consuming; solution: integrate with traffic platform to auto‑generate scripts and feed them into the testing platform.
Real‑time monitoring requirements; solution: integrate platform UI with the company’s Tetris monitoring system, allowing users to add custom monitoring panels.
Support for custom JAR packages; solution: copy JARs to a temporary directory before execution and delete them afterward to avoid class‑loader conflicts.
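A minimal sketch of the temporary-directory approach for user JARs: copy the uploads into a throwaway directory, point the runner at it, and delete it afterward so one run's classes never leak into the next. The JMeter `-Juser.classpath` usage in the comment is an assumption about the runner, not a documented part of the platform:

```python
import shutil
import tempfile
from pathlib import Path

def stage_jars(jar_dir: str) -> tempfile.TemporaryDirectory:
    """Copy user-supplied JARs into a throwaway directory.

    Returns the TemporaryDirectory so the caller controls cleanup;
    deleting it after the run removes the copied JARs and avoids
    class-loader conflicts between tests.
    """
    staged = tempfile.TemporaryDirectory(prefix="perf-jars-")
    for jar in Path(jar_dir).glob("*.jar"):
        shutil.copy(jar, staged.name)
    return staged

# Sketch of use with a JMeter-style runner (flag usage is an assumption):
#   with stage_jars("/uploads/team-a") as lib_dir:
#       subprocess.run(["jmeter", f"-Juser.classpath={lib_dir}", "-t", plan])
#   # the directory and the copied JARs are gone once the block exits
```

Because the copies live outside the runner's permanent lib directory, two teams' conflicting JAR versions never sit on the same classpath.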
Platform Benefits
Single test cycle time reduced by ~30%; repeat tests reduced by ~80%.
Since launch, 260 test scenarios have produced ~2,500 valid reports.
Daily page views are tracked (chart omitted).
Unattended automated baseline testing now covers ~80% of core applications, automatically surfacing potential performance issues.
Production testing follows defined procedures, reducing the risk of online incidents.
Future Plans
Automated middleware performance testing.
Elastic scaling of the platform.
Intelligent analysis and alerting.
Overall goal: improve testing efficiency and output.