Comprehensive Guide to nGrinder: Architecture, Environment Setup, and Load‑Testing Process
This article introduces nGrinder, a powerful load-testing tool: it explains the architecture and nGrinder's advantages over JMeter, then walks through controller and agent deployment, script creation, data preparation, TPS control, parameterization, test execution, result analysis, and advanced features for building a stable, scalable distributed performance-testing environment.
The article begins with an overview of nGrinder, a distributed high‑concurrency open‑source load‑testing platform developed by NHN, and compares it with Apache JMeter, highlighting nGrinder’s superior performance monitoring, concurrency handling, stability, and scalability.
It then describes the overall architecture, which consists of a controller (web UI, task distribution, monitoring) and multiple agents (execute test scripts, collect metrics). The workflow includes creating a console, requesting agents, distributing scripts/resources, running the test, and finally viewing reports.
Next, the guide details environment setup:
Download the latest nGrinder release (e.g., ngrinder-3.4).
Deploy the controller by placing ngrinder-controller-3.4.war into a Tomcat 8.0 webapps directory and starting the server.
Install agents by downloading the agent package from the controller UI, extracting it, and launching it with sh run_agent.sh & (stop with sh stop_agent.sh).
Optionally install the monitor component on target machines using sh run_monitor.sh.
The article then explains how to create load‑testing scripts via the controller UI, choose Groovy or Jython (preferring Groovy for performance), and configure request parameters, headers, cookies, etc.
To simulate realistic traffic, it suggests extracting real request data from access logs, splitting the data per agent/process/thread using the built‑in grinder.processNumber and grinder.threadNumber variables, and reading appropriate data slices in the script.
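The slicing arithmetic can be sketched in plain Java. This is an illustrative helper, not nGrinder code: in a real nGrinder Groovy script the process and thread numbers come from the grinder object (grinder.processNumber, grinder.threadNumber), and the class and method names here (DataSlicer, workerIndex, sliceFor) are invented for the example.

```java
import java.util.List;

// Illustrative sketch of per-worker data slicing: each (process, thread)
// pair maps to a unique global worker index, and each worker reads a
// disjoint contiguous slice of the prepared request data.
public class DataSlicer {

    /** Global index of a worker given its process and thread numbers. */
    public static int workerIndex(int processNumber, int threadsPerProcess,
                                  int threadNumber) {
        return processNumber * threadsPerProcess + threadNumber;
    }

    /** The contiguous slice of data owned by one worker out of totalWorkers. */
    public static <T> List<T> sliceFor(List<T> data, int workerIndex,
                                       int totalWorkers) {
        int chunk = data.size() / totalWorkers;
        int start = workerIndex * chunk;
        // The last worker also takes the remainder so no line is dropped.
        int end = (workerIndex == totalWorkers - 1) ? data.size() : start + chunk;
        return data.subList(start, end);
    }
}
```

With 10 data lines split across 3 workers, workers 0 and 1 each get 3 lines and the last worker gets 4, so every logged request is replayed exactly once.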
For stable TPS, the guide recommends adding a calculated Thread.sleep(waitTime), where waitTime = 1000 ms - requestTime, ensuring each thread sends one request per second (or a configurable interval).
Script parameterization is achieved via the controller’s “param” field; scripts can retrieve values with System.getProperty("param"), allowing dynamic adjustment of process/thread counts without code changes.
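Since the raw property is a string that may be absent or malformed, a defensive parse is worthwhile. A small Java sketch, assuming the controller exposes the field via a system property as described above (the ParamConfig/intParam names are invented for this example):

```java
// Illustrative helper for reading a numeric test parameter from a
// system property, falling back to a default when the property is
// missing or not a valid integer.
public class ParamConfig {

    public static int intParam(String name, int defaultValue) {
        String raw = System.getProperty(name);
        if (raw == null || raw.trim().isEmpty()) {
            return defaultValue;
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            return defaultValue; // malformed input: keep the safe default
        }
    }
}
```

A script could then read, say, a per-thread loop count with intParam("param", 100) and rerun the same test at different intensities purely from the controller UI.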
Execution steps include selecting “Performance Test” in the UI, creating a test, configuring virtual users (vusers), processes, and threads, and optionally scheduling delayed execution. The article notes best‑practice limits (e.g., ≤200 threads per process, ≤5000 vusers per agent) and advises against co‑locating agents with the target service.
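The capacity arithmetic behind those limits is simple enough to encode as a sanity check. A Java sketch of the rule-of-thumb figures quoted above (VuserPlan and its methods are illustrative names, not part of nGrinder):

```java
// Illustrative capacity check: one agent contributes
// processes × threadsPerProcess virtual users, and the guide's
// rules of thumb cap threads per process at 200 and vusers per
// agent at 5000.
public class VuserPlan {

    public static int vusersPerAgent(int processes, int threadsPerProcess) {
        return processes * threadsPerProcess;
    }

    public static boolean withinLimits(int processes, int threadsPerProcess) {
        return threadsPerProcess <= 200
            && vusersPerAgent(processes, threadsPerProcess) <= 5000;
    }
}
```

For example, 10 processes × 200 threads yields 2000 vusers on one agent and passes both checks, while 30 × 200 = 6000 exceeds the per-agent cap and would call for an additional agent instead.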
After running a test, users can view real‑time TPS, response times, and system metrics, as well as detailed CSV reports and custom data visualizations defined via grinder.statistics.registerSummaryExpression .
Finally, the guide covers extended features such as uploading third-party JARs and data files to the lib and resources directories, and creating custom data views for additional metrics.
In conclusion, nGrinder provides a stable, extensible, and scalable solution for performance testing, especially when scripts are written in Groovy, enabling accurate capacity planning for high‑traffic web services.
58 Tech
Official tech channel of 58, a platform for tech innovation, sharing, and communication.