Performance Testing of 360 Endpoint Security Management System with LoadRunner

This report details the performance testing of the 360 Endpoint Security Management System, covering product overview, architecture, tool selection (favoring LoadRunner), test scenarios, script implementation, execution results, resource utilization, and analysis, concluding with insights on testing methodology and optimization.


The 360 Endpoint Security Management System is a comprehensive solution for large enterprises that integrates antivirus protection with endpoint security control. It detects known viruses, unknown malicious code, and APT attacks, and provides asset management, patch management, security operations, network access control, mobile storage management, and security auditing.

The system architecture is illustrated in the accompanying diagram.

Various performance testing tools were evaluated, including LoadRunner (LR), JMeter, Tsung, Locust, and hardware traffic generators. After comparison, LoadRunner 11 (LR11) was selected for its stability and robust reporting capabilities.

Testing objectives include interface TPS testing, comprehensive scenario testing, and stability testing (long‑run and HA verification). TPS testing stresses application interfaces under continuous load, scenario testing simulates realistic traffic patterns based on production data, and stability testing assesses long‑duration performance and load‑balancer/HA behavior.

Test code, partially shown in the figures, simulates login and business request flows.
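As a rough illustration of what such a script looks like, here is a minimal VuGen-style (web/HTTP protocol) sketch of a login followed by a heartbeat request. The endpoint paths and the {ServerHost}, {TerminalId}, and {Password} parameters are hypothetical placeholders, not the product's real interfaces:

```c
Action()
{
    /* Verify the login response contains a success marker
       before trusting the transaction result. */
    web_reg_find("Text=success", LAST);

    lr_start_transaction("login");
    web_submit_data("login",
        "Action=http://{ServerHost}/login",
        "Method=POST",
        ITEMDATA,
        "Name=username", "Value={TerminalId}", ENDITEM,
        "Name=password", "Value={Password}",  ENDITEM,
        LAST);
    lr_end_transaction("login", LR_AUTO);

    /* A sample business request: the periodic heartbeat. */
    lr_start_transaction("heartbeat");
    web_url("heartbeat",
        "URL=http://{ServerHost}/heartbeat?tid={TerminalId}",
        LAST);
    lr_end_transaction("heartbeat", LR_AUTO);

    return 0;
}
```

Transaction markers like these are what feed the TPS and response-time figures reported later.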

Three test scripts were configured: a heartbeat script triggered every 10 seconds, a script for all other interfaces triggered every 10 minutes, and a key‑business script that runs frequently on a small set of terminals.

The execution simulated 10 000 terminals: Script A (heartbeat) fires every 10 seconds with one request, Script B fires every 10 minutes with one request, and Script C fires every 30 minutes with ten requests, representing normal and peak load conditions.
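In concrete terms, the heartbeat dominates this load model: 10 000 terminals firing every 10 seconds yields 10 000 / 10 = 1 000 requests per second from Script A alone, versus roughly 10 000 / 600 ≈ 17 requests per second from Script B. Script C contributes about 10 × N / 1 800 requests per second for N participating terminals (≈ 56 req/s if all 10 000 ran it, and correspondingly less for the small subset it actually targets).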

Results tables present transaction response times, success rates, and resource usage (CPU, memory, I/O) captured from the LR1, LR2, and LR3 runs.

Monitoring analysis shows a 100 % transaction success rate, an average response time of 1 second, CPU usage peaking at 82 % (average 25 %), and stable memory and read I/O; write I/O, however, spikes during database operations, indicating room for optimization. Overall, the regression test passed.
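One way to line such resource spikes up against transaction times directly in the LoadRunner Analysis graphs is to record them as custom data points. A minimal sketch using the real lr_user_data_point API; get_disk_write_kbps() is a hypothetical helper (e.g. parsing iostat output from the database host), not part of LoadRunner:

```c
/* Samples the database server's write-I/O rate and records it as a
   custom data point, so spikes can be overlaid on transaction
   response times in LoadRunner Analysis. */
Action()
{
    double write_kbps = get_disk_write_kbps(); /* assumed helper */
    lr_user_data_point("db_write_io_kbps", write_kbps);
    lr_think_time(5); /* sample roughly every 5 seconds */
    return 0;
}
```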

The conclusion emphasizes that tool selection aids convenience, but the essential factor is aligning the testing approach with product business requirements, understanding architecture and data flows, and preparing realistic test data to closely mimic production environments.

Tags: JMeter, load testing, system monitoring, LoadRunner, Endpoint Security
Written by 360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
