
Performance Testing and Optimization of a Web Application Using JMeter and BlazeMeter

This article details a comprehensive performance testing workflow—including requirement analysis, script recording with BlazeMeter, data construction, iterative optimization, and final results—targeting a web application that must handle over 5,000 TPS with 2 million database records, highlighting bottlenecks in Redis, MySQL, and code logic.

360 Quality & Efficiency

Test Requirements

The test simulates a user login flow, extracts the browser session, and performs third-party application redirects, requiring 3-4 API calls per transaction. The target workload is >5,000 TPS against a database of 2 million records under concurrent user logins.

Test Preparation

Several script-recording tools were evaluated; the BlazeMeter Chrome extension was chosen over the outdated BadBoy and JMeter's built-in proxy recorder. BlazeMeter captures browser actions, exports a JMX file, and supports domain filtering so that only the relevant server-side requests are kept.

01 Script Recording

Using BlazeMeter, the login, initialization, and application-list retrieval scripts were recorded (dev.*.360.cn domain). The generated JMX file was filtered to retain only the internal APIs, discarding third-party calls.

02 Data Construction

To simulate many concurrent users, a CSV file with 2,000 pre-created users was imported into JMeter. The "Same user on each iteration" option was disabled so that each thread uses a distinct session. BeanShell pre-processors additionally generated unique usernames for write-heavy scenarios.
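A user file of this shape can be produced with a short helper before the test run. This is a minimal sketch: the column names, the perf_user prefix, and the password pattern are illustrative assumptions, not the actual test data.

```java
import java.io.FileWriter;
import java.io.IOException;

public class UserCsvGenerator {
    // Build one CSV row; prefix and password pattern are invented for illustration.
    static String row(int i) {
        return String.format("perf_user_%04d,Passw0rd!%04d", i, i);
    }

    public static void main(String[] args) throws IOException {
        try (FileWriter out = new FileWriter("users.csv")) {
            // Header names must match the variable names configured in
            // JMeter's CSV Data Set Config element.
            out.write("username,password\n");
            for (int i = 1; i <= 2000; i++) {
                out.write(row(i) + "\n");
            }
        }
    }
}
```

In the test plan, a CSV Data Set Config pointed at this file then hands each thread its own credentials, which is what makes the per-thread distinct sessions possible.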

03 Test Optimization Process

Initial runs with 1,000 threads yielded fewer than 400 QPS with a rising error rate. The 8-core, 16 GB server's CPU saturated quickly, and Redis/MySQL lock contention together with password-verification logic limited throughput. Isolated endpoint tests showed a single pod could exceed 15,000 QPS, confirming that the bottleneck was in the code path rather than the infrastructure. Optimizing the login API raised its QPS from under 200 to over 1,000. Scaling out to multiple pods and driving 10,000 threads from four load generators via MeterSphere finally achieved over 6,000 QPS, though error rates rose once the load was pushed beyond capacity.
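The arithmetic behind this diagnosis is Little's Law: concurrency is roughly throughput times latency. The sketch below applies it to the numbers above; the 200 ms target latency in the second call is an illustrative assumption, not a measured value.

```java
public class LittleLaw {
    // 1,000 threads sustaining only 400 QPS implies each transaction
    // was occupying a thread for threads / qps seconds on average.
    static double impliedLatencySeconds(int threads, double qps) {
        return threads / qps;
    }

    // Conversely, the thread count needed to reach a target QPS at a
    // given average transaction time.
    static long threadsNeeded(double targetQps, double latencySeconds) {
        return (long) Math.ceil(targetQps * latencySeconds);
    }

    public static void main(String[] args) {
        System.out.println(impliedLatencySeconds(1000, 400)); // 2.5 s per transaction
        System.out.println(threadsNeeded(5000, 0.2));         // 1000 threads at 200 ms
    }
}
```

A 2.5 s average transaction time on an isolated login endpoint is what pointed to code-level work (password verification, lock contention) rather than raw hardware limits.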

04 Follow-up

After the database was grown to 2 million rows, the login API sustained over 9,000 QPS. Further read/write concurrency tests, which used BeanShell scripts to avoid data conflicts, consistently delivered over 6,000 QPS.
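The conflict-avoidance idea in those BeanShell scripts can be sketched in plain Java: derive each write's username from a timestamp plus a shared counter so no two concurrent iterations touch the same row. Inside JMeter this would live in a BeanShell or JSR223 PreProcessor (storing the value via `vars.put`); the wr_user prefix here is an assumption.

```java
import java.util.concurrent.atomic.AtomicLong;

public class UniqueUserSource {
    private static final AtomicLong SEQ = new AtomicLong();

    // Timestamp keeps names unique across test runs; the atomic counter
    // keeps them unique across threads within a run.
    static String nextUsername() {
        return "wr_user_" + System.currentTimeMillis() + "_" + SEQ.incrementAndGet();
    }

    public static void main(String[] args) {
        System.out.println(nextUsername());
        System.out.println(nextUsername());
    }
}
```

Without some scheme like this, write-heavy threads collide on duplicate keys and the error rate measures data conflicts rather than server capacity.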

Summary

Performance is affected by thread count, data volume, Redis/MySQL configuration, application logic, memory, and CPU. Systematic testing and iterative tuning, monitoring CPU, memory, Redis latency, and error rates throughout, are essential to identify bottlenecks and achieve the desired throughput.

Tags: optimization, Redis, performance testing, MySQL, JMeter, load testing, BlazeMeter
Written by

360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
