
Performance Testing: Concepts, Scenarios, Tools, and Best Practices

This article explains what performance testing is, when to conduct it, and typical scenarios, then walks through execution step by step: requirements, test design, tool selection, Locust script examples, result analysis, and key metrics such as throughput, response time, P90, and optimal concurrency.

Snowball Engineer Team

1. What Is Performance Testing

In the era of the Internet of Everything, response speed is critical, and performance testing is the key technique for evaluating it. Performance testing uses automated tools to measure system performance indicators, aiming to identify bottlenecks through load, stress, spike, endurance, and scalability tests.

Load testing: verifies the system's ability to handle the expected user load before release.

Stress testing: pushes the system to extreme loads to reveal how it handles high traffic and concurrency.

Spike testing: checks the system's reaction to sudden load spikes.

Endurance testing: assesses long-duration execution under high load.

Scalability testing: determines whether the system can dynamically expand under heavy load.
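The difference between a steady load test and a spike test comes down to the shape of the user curve over time. As a sketch, the following hypothetical tick-style schedule mirrors the contract of Locust's LoadTestShape (return a target user count for each elapsed second, or None to stop); the numbers are invented:

```python
def spike_profile(t, base_users=100, spike_users=1000,
                  spike_start=60, spike_len=10, duration=180):
    """Target user count at elapsed second t: a steady base load
    with one short burst, i.e. a spike-test schedule."""
    if t >= duration:
        return None  # stop the test
    if spike_start <= t < spike_start + spike_len:
        return spike_users
    return base_users

# Steady load before, during, and after the burst:
print(spike_profile(30), spike_profile(65), spike_profile(120))
# → 100 1000 100
```

A load test would simply return base_users for the whole run; an endurance test is the same shape with a much larger duration.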

2. When to Conduct Performance Testing

Software testing typically includes unit, integration, system, and acceptance stages. Most companies start performance testing after integration testing, but delaying it makes defect localization costly. Early testing of core business code—especially algorithms—helps avoid hidden bottlenecks in large, distributed or micro‑service architectures.

"Like a four-leaf clover hidden among ordinary clovers, a performance bottleneck is rare and hard to spot among many normal code paths. Testing early makes it easier to locate."

3. Scenarios Requiring Performance Testing

All interfaces should be tested, but additional critical scenarios include middleware (e.g., Kafka, RabbitMQ) whose performance can become a hidden bottleneck when integrated into a large system.

Even middleware that can handle tens of thousands of TPS may suffer when upstream or downstream connections are insufficient, just like a water tank that can hold enough water but is fed by a pipe that is too narrow.
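The tank-and-pipe analogy can be stated as arithmetic: end-to-end throughput is capped by the slowest stage in the chain, no matter how fast the middleware itself is. A toy illustration with made-up stage capacities:

```python
# Hypothetical per-stage capacities (requests per second); the
# message queue is fast, but a narrow upstream pipe caps the chain.
stage_tps = {
    "upstream gateway": 800,
    "message queue": 50000,
    "downstream consumer": 1200,
}

# End-to-end throughput is bounded by the narrowest stage.
bottleneck = min(stage_tps, key=stage_tps.get)
print(bottleneck, stage_tps[bottleneck])  # → upstream gateway 800
```

This is why middleware should be tested together with its upstream and downstream connections, not only in isolation.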

4. How to Conduct Performance Testing

Performance Requirements

Define clear, numeric performance goals (e.g., response time < 200 ms, TPS > 1000). Requirements should be agreed upon by product, development, testing, and operations teams.

Specify the test type (load, stress, etc.).

Ensure the requirement meets the minimum operational standard.

Provide concrete numbers.

Align all stakeholders.
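Concrete numeric goals lend themselves to an automated pass/fail check once results are in. A minimal sketch, using hypothetical threshold values taken from the example requirement above (response time < 200 ms, TPS > 1000):

```python
def meets_requirements(samples_ms, test_seconds,
                       max_avg_ms=200.0, min_tps=1000.0):
    """Return True if the measured average response time and
    throughput both satisfy the agreed numeric targets."""
    avg_ms = sum(samples_ms) / len(samples_ms)
    tps = len(samples_ms) / test_seconds
    return avg_ms < max_avg_ms and tps >= min_tps

# 12,000 requests in 10 s at ~150 ms each passes both targets:
print(meets_requirements([150.0] * 12000, 10))  # → True
```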

Test Tools

Common tools are LoadRunner (commercial), JMeter (open‑source, Java‑based), wrk (lightweight C tool), and Locust (Python‑based, coroutine‑driven, supports distributed load).

|                             | LoadRunner     | JMeter        | wrk            | Locust        |
|-----------------------------|----------------|---------------|----------------|---------------|
| Distributed load            | Supported      | Supported     | Not supported  | Supported     |
| Single-machine concurrency  | Low            | Low           | Low            | High          |
| Concurrency model           | Process/Thread | Thread        | Thread         | Coroutine     |
| Language                    | C/Java         | Java          | C              | Python        |
| Report & analysis           | Comprehensive  | Simple charts | Simple results | Simple charts |
| License                     | Commercial     | Open-source   | Open-source    | Open-source   |

Locust advantages: low resource consumption, coroutine‑based concurrency, high per‑machine user count, and easy distributed deployment. Drawbacks: no script recorder, no built‑in resource monitoring, and minimal reporting.

Executing Tests with Locust

Install Python 3.9+ and Locust 2.9, then write a script like the following:

from locust import HttpUser, TaskSet, task, between

class ReplayAction(TaskSet):
    @task(8)
    def demo(self):
        url = '/S/SH600519'
        headers = {'Content-Type': 'application/json; charset=UTF-8'}
        try:
            # self.client automatically prefixes the host defined on the
            # HttpUser class, so a relative URL is sufficient here.
            with self.client.get(url, headers=headers, name=url, catch_response=True) as response:
                if response.status_code != 200:
                    response.failure("Failed")
        except Exception as e:
            print("Exception occurred: %s" % e)

class RePlayer(HttpUser):
    wait_time = between(0, 0)
    host = "https://xueqiu.com"
    tasks = [ReplayAction]

Run the script in headless mode (1000 users, spawn rate of 100 users per second, 3-minute run):

locust -f demo.py --headless -u 1000 -r 100 -t 3m

For distributed testing, start a master:

locust -f demo.py --master --headless -u 1000 -r 100 -t 3m --expect-workers=1

and workers (the user count, spawn rate, and run time are controlled by the master):

locust -f demo.py --worker --master-host=xx.xx.xx.xx

5. Analyzing Performance Results

Key metrics include throughput (TPS), response time, average response time, min/max response time, 90th percentile (P90), concurrent users, maximum concurrent users, and optimal concurrency. P90, the time within which 90% of requests complete, offers a more robust view than the average, which extreme outliers can skew.
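As a sketch of how P90 is derived (nearest-rank percentile over sorted samples; the response times below are invented):

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile: the smallest sample such that
    at least p% of all samples are at or below it."""
    ordered = sorted(samples_ms)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[k]

samples = [120, 130, 150, 140, 135, 125, 145, 160, 900, 155]
print(sum(samples) / len(samples))  # → 216.0 (average skewed by the 900 ms outlier)
print(percentile(samples, 90))      # → 160 (P90 is robust to the extreme value)
```

Here a single 900 ms outlier drags the average to 216 ms, while P90 still reflects the 160 ms most users actually experience.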

Typical load‑response curves show four regions: light pressure (≤ 50 users), comfortable pressure (50‑100 users – optimal concurrency), heavy pressure (100‑120 users), and severe pressure (> 120 users). The “performance area” (integral of TPS vs. response time) reflects overall system capability, stability, and fault tolerance.
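One way to read "optimal concurrency" off measured data is to take the user count that maximizes TPS while response time stays within target. A hedged sketch over invented ramp-up measurements:

```python
# Hypothetical (users, tps, avg_response_ms) samples from a ramp-up run.
results = [
    (25, 400, 60),
    (50, 800, 70),
    (75, 1100, 90),
    (100, 1250, 120),
    (110, 1260, 300),  # TPS flat, latency climbing: past the knee
    (120, 1100, 800),  # overloaded: TPS falls, latency explodes
]

def optimal_concurrency(results, max_rt_ms=200):
    """User count with the highest TPS among samples that still
    meet the response-time target."""
    ok = [r for r in results if r[2] <= max_rt_ms]
    return max(ok, key=lambda r: r[1])[0]

print(optimal_concurrency(results))  # → 100
```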

Finally, front‑end performance techniques such as step‑wise loading and pre‑loading can further improve perceived responsiveness.

6. Summary

Performance testing should start early, especially on core business code, and must consider middleware’s upstream/downstream impact. Use metrics like P90 to obtain realistic insights, and choose tools (LoadRunner, JMeter, wrk, Locust) according to project needs.

Tags: Performance Testing, Stress Testing, JMeter, Load Testing, Locust
Written by

Snowball Engineer Team

Proactivity, efficiency, professionalism, and empathy are the core values of the Snowball Engineer Team; curiosity, passion, and sharing of technology drive their continuous progress.
