Using Locust for HTTP Request Statistics: Real‑time Monitoring, CSV Export, Custom Metrics, and Analysis
This guide explains how to leverage Locust's built‑in statistics and reporting features to monitor HTTP requests in real time via its web UI, export results as CSV files, customize metric collection in scripts, and analyze the data with external tools.
Locust automatically records every HTTP request made through `self.client`, capturing response times, success rates, and other key metrics, and it provides several ways to view and analyze these statistics.
1. Real‑time monitoring with Locust's web interface – When Locust starts, it launches a web UI (by default at http://localhost:8089) that displays live statistics such as Requests Per Second (RPS), average response time, number of failed requests, and per‑endpoint performance metrics. To start Locust and open the UI, run:
```shell
locust -f your_locustfile.py
```

Then open the URL in a browser, set the number of users and spawn rate, and click “Start swarming” to begin the test.
2. Exporting test results to CSV files – Besides live monitoring, Locust can export results to CSV for deeper analysis. Using the --csv option with headless mode generates files with a specified prefix. Example command:
```shell
locust -f your_locustfile.py --headless -u 100 -r 10 -t 1m --csv=results
```

This runs a 1‑minute test with 100 users, spawning 10 users per second, and creates results_stats.csv (summary statistics) and results_stats_history.csv (time‑series data for RPS, response times, etc.).
3. Customizing statistics in the Locust script – For more detailed or specific metrics, you can add custom logic to your Locust script, such as event listeners that record successes, failures, or additional performance indicators. Example script:
```python
from locust import HttpUser, task, between, events

class WebsiteUser(HttpUser):
    wait_time = between(1, 5)

    @task
    def my_task(self):
        # catch_response lets us decide ourselves what counts as a failure
        with self.client.get("/my_endpoint", catch_response=True) as response:
            if response.status_code != 200:
                response.failure("Got wrong response")
            else:
                response.success()

# Define event handlers
@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    print("Test started")

@events.request.add_listener
def on_request(request_type, name, response_time, response_length, exception, **kwargs):
    if exception:
        print(f"Request failed: {name}, Exception: {exception}")
    else:
        print(f"Request succeeded: {name}, Response time: {response_time}")
```

This script not only sends requests but also marks them as successful or failed based on status codes, and registers listeners that print detailed request outcomes. Note that since Locust 2.0 the single `events.request` hook (with an `exception` argument that is `None` on success) replaces the older separate `request_success` and `request_failure` listeners.
4. Analyzing exported data – The generated CSV files can be opened in Excel or other data‑analysis tools to create charts that show trends such as RPS over time or changes in average response time, helping you understand system behavior.
5. Enhancing analysis with third‑party plugins or services – For richer visualizations, you can integrate Locust with Grafana and Prometheus, or use APM solutions like New Relic or Datadog, which provide advanced application‑level performance monitoring.
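One lightweight way to feed such tools is the StatsD wire protocol, which the Datadog agent (and many Prometheus/Grafana pipelines, via an exporter) can ingest. The sketch below formats and sends StatsD timing datagrams with only the standard library; the host, port, and metric names are illustrative defaults, and the Locust wiring is shown as a comment since it assumes Locust 2.x.

```python
import socket

def statsd_timing(metric, value_ms):
    # StatsD timing datagrams are plain text: "<metric>:<value>|ms"
    return f"{metric}:{value_ms}|ms".encode()

def send_timing(metric, value_ms, host="127.0.0.1", port=8125):
    # 8125 is the conventional StatsD port (also the Datadog agent default)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(statsd_timing(metric, value_ms), (host, port))
    finally:
        sock.close()

# Wiring it into a locustfile (sketch; requires Locust 2.x installed):
#
# from locust import events
#
# @events.request.add_listener
# def report(request_type, name, response_time, exception, **kwargs):
#     # Sanitize "name" first if your endpoints contain "/" characters.
#     send_timing(f"locust.request.{name}", response_time)
```

Because StatsD uses fire‑and‑forget UDP, this adds negligible overhead to the load test itself.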
Test Development Learning Exchange