Performance Monitoring with JMeter, InfluxDB, Prometheus, and Grafana
This article shows how to set up end-to-end performance monitoring: JMeter pushes test metrics to InfluxDB through its Backend Listener and Grafana visualizes them, while node_exporter exposes system metrics for Prometheus to scrape and Grafana to chart. It covers configuration, data storage, query examples, and practical visualization techniques.
In this article we outline the most common monitoring components used in performance testing, focusing on the workflow from data collection to visualization.
JMeter + InfluxDB + Grafana data flow
JMeter can generate results via its console, plugins, or HTML reports, but for real-time analysis we use the Backend Listener to push metrics asynchronously to InfluxDB (or Graphite). The listener sends transaction statistics such as TPS, response time, thread count, and error rate at a fixed interval (five seconds by default).
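For reference, these are the listener's main parameters as they appear in the Backend Listener element; the URL, application name, and test title below are illustrative values, not defaults:

```text
influxdbMetricsSender  org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
influxdbUrl            http://172.17.211.143:8086/write?db=jmeter
application            7ddemo
measurement            jmeter
summaryOnly            false
samplersRegex          .*
percentiles            90;95;99
testTitle              Test Cycle1
```

The `application` value becomes a tag on every data point, which is why the InfluxDB queries in this article filter on application='7ddemo'.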
The collected metrics are stored in InfluxDB in two measurements: events (for test events) and jmeter (for transaction statistics). Example InfluxDB commands:
> show databases
name: databases
name
----
_internal
jmeter
> use jmeter
Using database jmeter
> show MEASUREMENTS
name: measurements
name
----
events
jmeter
> select * from events where application='7ddemo'
name: events
time application text title
----
1575255462806000000 7ddemo Test Cycle1 started ApacheJMeter
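The jmeter measurement can be inspected the same way. A query like the following (illustrative; the exact field names vary slightly across JMeter versions) shows the per-transaction statistics, where the transaction tag is either a sampler name or the aggregate value all:

```text
> select * from jmeter where application='7ddemo' and transaction='all' limit 3
```
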
Grafana is configured with an InfluxDB data source and a pre-built JMeter dashboard (ID 5496). After adding the data source and importing the dashboard, the same metrics appear in Grafana as they do in JMeter's console.
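Instead of adding the data source through the UI, Grafana can also provision it from a file at startup. A minimal sketch, where the file path, data source name, and URL are assumptions for this setup:

```yaml
# /etc/grafana/provisioning/datasources/influxdb.yml
apiVersion: 1
datasources:
  - name: InfluxDB-jmeter
    type: influxdb
    access: proxy
    url: http://172.17.211.143:8086
    database: jmeter
```

File-based provisioning keeps the data source definition under version control alongside the test plan.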
Backend Listener configuration
The InfluxDB Backend Listener is added to the JMeter test plan. Its core code adds metrics for all, OK, and KO statuses:
private void addMetrics(String transaction, SamplerMetric metric) {
// FOR ALL STATUS
addMetric(transaction, metric.getTotal(), metric.getSentBytes(), metric.getReceivedBytes(), TAG_ALL, metric.getAllMean(), metric.getAllMinTime(),
metric.getAllMaxTime(), allPercentiles.values(), metric::getAllPercentile);
// FOR OK STATUS
addMetric(transaction, metric.getSuccesses(), null, null, TAG_OK, metric.getOkMean(), metric.getOkMinTime(),
metric.getOkMaxTime(), okPercentiles.values(), metric::getOkPercentile);
// FOR KO STATUS
addMetric(transaction, metric.getFailures(), null, null, TAG_KO, metric.getKoMean(), metric.getKoMinTime(),
metric.getKoMaxTime(), koPercentiles.values(), metric::getKoPercentile);
metric.getErrors().forEach((error, count) -> addErrorMetric(transaction, error.getResponseCode(),
error.getResponseMessage(), count));
}

Metrics are then batched and written to InfluxDB asynchronously:
@Override public void writeAndSendMetrics() {
if (!copyMetrics.isEmpty()) {
try {
if (httpRequest == null) {
httpRequest = createRequest(url);
}
StringBuilder sb = new StringBuilder(copyMetrics.size() * 35);
for (MetricTuple metric : copyMetrics) {
sb.append(metric.measurement)
.append(metric.tag)
.append(" ")
.append(metric.field)
.append(" ")
.append(metric.timestamp + "000000") // timestamp is in ms; pad to ns as InfluxDB expects
.append("\n");
}
StringEntity entity = new StringEntity(sb.toString(), StandardCharsets.UTF_8);
httpRequest.setEntity(entity);
lastRequest = httpClient.execute(httpRequest, new FutureCallback<HttpResponse>() {
@Override public void completed(final HttpResponse response) {
int code = response.getStatusLine().getStatusCode();
if (MetricUtils.isSuccessCode(code)) {
if (log.isDebugEnabled()) {
log.debug("Success, number of metrics written: {}", copyMetrics.size());
}
} else {
log.error("Error writing metrics to influxDB Url: {}, responseCode: {}, responseBody: {}", url, code, getBody(response));
}
}
@Override public void failed(final Exception ex) {
log.error("failed to send data to influxDB server : {}", ex.getMessage());
}
@Override public void cancelled() {
log.warn("Request to influxDB server was cancelled");
}
});
} catch (Exception e) {
log.error("Exception while sending metrics", e);
}
}
}

node_exporter + Prometheus + Grafana data flow
For system‑level monitoring we use node_exporter to expose OS metrics, Prometheus to scrape them, and Grafana to visualize. The node_exporter binary is started with:
# ./node_exporter --web.listen-address=:9200 &

Prometheus is downloaded and configured to scrape the exporter:
# wget -c https://github.com/prometheus/prometheus/releases/download/v2.14.0/prometheus-2.14.0.linux-amd64.tar.gz
# tar -xzf prometheus-2.14.0.linux-amd64.tar.gz
# cd prometheus-2.14.0.linux-amd64
# cat >> prometheus.yml <<'EOF'
  - job_name: 's1'
    static_configs:
      - targets: ['172.17.211.143:9200']
EOF
# ./prometheus --config.file=prometheus.yml &

Grafana is then pointed to the Prometheus data source and a node_exporter dashboard (ID 11074) is imported. Example Prometheus queries for CPU usage:
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="system"}[30m])) by (instance)
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="user"}[30m])) by (instance)
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="iowait"}[30m])) by (instance)
1 - avg(irate(node_cpu_seconds_total{instance=~"$node",mode="idle"}[30m])) by (instance)

Each expression yields the per-instance fraction of CPU time spent in that mode; the last one subtracts the idle fraction from 1 to give overall utilization. Note that irate uses only the last two samples in the range, so the 30m window simply guarantees that at least two samples exist.

In short, understanding where each metric comes from and what it means, whether viewed in Grafana or via command-line tools, is essential for accurate performance analysis.
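As a closing aside, the idle-based query works because node_cpu_seconds_total is a monotonically increasing counter: utilization is 1 minus the per-second rate at which idle CPU-seconds accumulate. A minimal Java sketch of that arithmetic, where the sampled counter values are hypothetical:

```java
public class CpuUtilization {

    /**
     * Mirrors 1 - irate(idle counter): the fraction of the sampling
     * window in which the CPU was doing non-idle work.
     */
    static double utilization(double idlePrev, double idleCur, double windowSeconds) {
        // Rate of idle CPU-seconds consumed per wall-clock second (0.0 .. 1.0 per core).
        double idleRate = (idleCur - idlePrev) / windowSeconds;
        return 1.0 - idleRate;
    }

    public static void main(String[] args) {
        // Idle counter grew by 27 s over a 30 s scrape window -> 10% busy.
        System.out.printf("%.2f%n", utilization(1000.0, 1027.0, 30.0));
    }
}
```
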
Architecture Digest
Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.