Operations

How to Visualize JMeter Performance Data with Grafana, InfluxDB, and Prometheus

This article walks through setting up real‑time performance monitoring: JMeter metrics flow to InfluxDB via the Backend Listener and are visualized in Grafana, and the same approach is then extended to system metrics with node_exporter and Prometheus. Configuration steps, code excerpts, and query examples are included.


JMeter + InfluxDB + Grafana Data Display Logic

When using JMeter for load testing, the usual practice is to view results in the JMeter console, a plugin, or generated HTML. However, these methods are cumbersome for real‑time analysis. By configuring JMeter's Backend Listener to send metrics asynchronously to InfluxDB (or Graphite), we can visualize performance trends directly in Grafana.

The Backend Listener is supported from JMeter 2.13 (Graphite) and JMeter 3.3 (InfluxDB). Once enabled, JMeter streams metrics such as TPS, response time, thread count, and error rate to InfluxDB every 30 seconds (configurable via the summariser.interval property in jmeter.properties).
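
The corresponding entry in jmeter.properties ships commented out at its default value:

<code># interval between summaries (in seconds), default 30 seconds
#summariser.interval=30
</code>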

JMeter Backend Listener Configuration

In the JMeter test plan, add a Backend Listener and choose the InfluxDB implementation (InfluxdbBackendListenerClient). Set the InfluxDB write URL and an application name (stored as a tag on every data point).
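
As a sketch, the listener's parameter list (names as in JMeter's InfluxDB client; values here are illustrative) typically includes:

<code>influxdbMetricsSender  org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
influxdbUrl            http://localhost:8086/write?db=jmeter
application            myapp
measurement            jmeter
summaryOnly            false
samplersRegex          .*
percentiles            90;95;99
</code>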

<code>// From JMeter's InfluxdbBackendListenerClient: for each transaction,
// one metric line is written per status (all / ok / ko), plus one
// error metric per distinct response code and message.
private void addMetrics(String transaction, SamplerMetric metric) {
    // FOR ALL STATUS
    addMetric(transaction, metric.getTotal(), metric.getSentBytes(), metric.getReceivedBytes(), TAG_ALL,
        metric.getAllMean(), metric.getAllMinTime(), metric.getAllMaxTime(),
        allPercentiles.values(), metric::getAllPercentile);
    // FOR OK STATUS
    addMetric(transaction, metric.getSuccesses(), null, null, TAG_OK,
        metric.getOkMean(), metric.getOkMinTime(), metric.getOkMaxTime(),
        okPercentiles.values(), metric::getOkPercentile);
    // FOR KO STATUS (failed samples)
    addMetric(transaction, metric.getFailures(), null, null, TAG_KO,
        metric.getKoMean(), metric.getKoMinTime(), metric.getKoMaxTime(),
        koPercentiles.values(), metric::getKoPercentile);

    metric.getErrors().forEach((error, count) -> addErrorMetric(transaction,
        error.getResponseCode(), error.getResponseMessage(), count));
}
</code>

The collected metrics are then flushed asynchronously to InfluxDB's HTTP write endpoint:

<code>@Override public void writeAndSendMetrics() {
    if (!copyMetrics.isEmpty()) {
        try {
            if (httpRequest == null) {
                httpRequest = createRequest(url);
            }
            StringBuilder sb = new StringBuilder(copyMetrics.size() * 35);
            for (MetricTuple metric : copyMetrics) {
                // Line protocol: measurement+tags, a space, the fields,
                // a space, then the timestamp. The six appended zeros
                // convert milliseconds to the nanosecond precision
                // InfluxDB expects by default.
                sb.append(metric.measurement)
                  .append(metric.tag)
                  .append(" ")
                  .append(metric.field)
                  .append(" ")
                  .append(metric.timestamp)
                  .append("000000")
                  .append("\n");
            }
            StringEntity entity = new StringEntity(sb.toString(), StandardCharsets.UTF_8);
            httpRequest.setEntity(entity);
            lastRequest = httpClient.execute(httpRequest, new FutureCallback<HttpResponse>() {
                @Override public void completed(final HttpResponse response) {
                    int code = response.getStatusLine().getStatusCode();
                    if (MetricUtils.isSuccessCode(code)) {
                        if (log.isDebugEnabled()) {
                            log.debug("Success, number of metrics written: {}", copyMetrics.size());
                        }
                    } else {
                        log.error("Error writing metrics to influxDB Url: {}, responseCode: {}, responseBody: {}", url, code, getBody(response));
                    }
                }
                @Override public void failed(final Exception ex) {
                    log.error("failed to send data to influxDB server : {}", ex.getMessage());
                }
                @Override public void cancelled() {
                    log.warn("Request to influxDB server was cancelled");
                }
            });
        } catch (Exception e) {
            log.error("Exception while sending metrics", e);
        }
    }
}
</code>

In InfluxDB, two measurements are created: events (test‑level metadata such as start/end annotations) and jmeter (per‑transaction statistics). Grafana queries these measurements to draw TPS and 95th‑percentile response‑time curves.
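To make the write format concrete, here is a minimal, self‑contained sketch of the line‑protocol string the sender builds. The tag and field names (application, transaction, statut, count, avg) mirror what JMeter writes; the values are illustrative:

```java
public class LineProtocolSketch {
    // Mirrors the StringBuilder loop in writeAndSendMetrics():
    // measurement + tags, a space, the fields, a space, and the
    // millisecond timestamp padded with six zeros to nanoseconds.
    static String toLine(String measurement, String tags, String fields, long epochMillis) {
        return measurement + tags + " " + fields + " " + epochMillis + "000000";
    }

    public static void main(String[] args) {
        System.out.println(toLine("jmeter",
                ",application=demo,transaction=login,statut=ok",
                "count=42,avg=118.5",
                1700000000000L));
    }
}
```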

Grafana Configuration

Add an InfluxDB data source (URL, database, user, password) and import the official JMeter dashboard (ID 5496). The dashboard automatically queries the jmeter measurement to display real‑time throughput and latency.
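
As an illustration, a throughput panel on such a dashboard issues an InfluxQL query roughly like the following ($timeFilter and $__interval are Grafana template variables; the exact queries in dashboard 5496 may differ):

<code>SELECT sum("count") FROM "jmeter"
  WHERE "transaction" = 'all' AND $timeFilter
  GROUP BY time($__interval)
</code>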

node_exporter + Prometheus + Grafana Data Display Logic

For system‑level monitoring, the typical stack is node_exporter → Prometheus → Grafana. node_exporter exposes OS counters (CPU, memory, disk, etc.) as Prometheus metrics.

Deploy node_exporter

Download the binary, make it executable, and start it:

<code># ./node_exporter --web.listen-address=:9200 &
</code>
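
Once started, the exporter serves plain‑text metrics at /metrics on the chosen port; fetching it shows the raw counters (sample values are illustrative):

<code># curl -s http://localhost:9200/metrics | grep node_cpu_seconds_total | head -3
node_cpu_seconds_total{cpu="0",mode="idle"} 102713.37
node_cpu_seconds_total{cpu="0",mode="system"} 1520.42
node_cpu_seconds_total{cpu="0",mode="user"} 3411.89
</code>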

Configure Prometheus

Add a scrape job to prometheus.yml:

<code>- job_name: 's1'
  static_configs:
  - targets: ['172.17.211.143:9200']
</code>
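
In context, a minimal prometheus.yml carrying that job might look like this (scrape_interval shown at a common default):

<code>global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 's1'
    static_configs:
      - targets: ['172.17.211.143:9200']
</code>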

Start Prometheus with the configuration file:

<code># ./prometheus --config.file=prometheus.yml &
</code>

Grafana Setup for node_exporter

Create a Prometheus data source in Grafana and import an official node_exporter dashboard (ID 11074). The dashboard queries metrics such as node_cpu_seconds_total to compute CPU usage:

<code># system / user / iowait time, each drawn as its own series
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="system"}[30m])) by (instance)
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="user"}[30m])) by (instance)
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="iowait"}[30m])) by (instance)
# total busy time: everything that is not idle
1 - avg(irate(node_cpu_seconds_total{instance=~"$node",mode="idle"}[30m])) by (instance)
</code>
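
To ground the idle‑based expression, here is a small arithmetic sketch of what it computes for a single CPU (the sample values are invented for illustration):

```java
public class CpuUsageSketch {
    // node_cpu_seconds_total counts seconds a CPU spent in a mode.
    // irate() divides the counter delta by the elapsed time, so the
    // busy fraction is 1 - (idle-seconds delta / window seconds).
    static double usage(double idleStart, double idleEnd, double windowSeconds) {
        double idleFraction = (idleEnd - idleStart) / windowSeconds;
        return 1.0 - idleFraction;
    }

    public static void main(String[] args) {
        // idle counter grew by 27 s over a 30 s window -> about 10% busy
        System.out.println(usage(1000.0, 1027.0, 30.0));
    }
}
```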

These queries read the same kernel counters that the Linux top command displays, confirming that Grafana is visualizing the underlying OS metrics.

Summary

The article demonstrates how to replace manual JMeter HTML reports with a real‑time monitoring pipeline using JMeter Backend Listener, InfluxDB, Prometheus, and Grafana. Understanding the data source and its meaning is essential for accurate performance analysis and troubleshooting.

Tags: Performance Monitoring, Prometheus, JMeter, InfluxDB, Grafana, node_exporter
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
