
Analyzing Throughput Errors in JMeter Load Testing

JMeter’s reported throughput can be misleading because it includes local processing time, especially when response validation such as regex extraction adds overhead; the result can significantly understate the actual server load. The article demonstrates this with experiments and suggests micro-benchmark corrections to obtain accurate figures.

FunTester

JMeter’s reported throughput can be misleadingly low because it incorporates time spent on the local machine during request processing.

According to classic theory, throughput (TPS or QPS) should be calculated as the number of concurrent threads divided by the average response time, or as the total request count divided by the total test duration:

tps = threads / AVG(t)

or

tps = COUNT(requests) / T

In the first example (average response 593 ms, 100 threads) the calculated throughput is 168.63 and JMeter reports 166.4, a negligible error. In the third example (average response 791 ms, 100 threads) the calculated throughput is 126.42 but JMeter reports only 92.3, a large discrepancy.
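The two formulas can be checked against these figures directly. A minimal sketch (the example numbers come from the article; the class and method names are illustrative):

```java
// Compares classic-theory throughput (threads / avg response time)
// against JMeter's reported figure, showing the size of each error.
public class ThroughputCheck {

    // tps = threads / AVG(t), with the average response time in seconds
    static double theoreticalTps(int threads, double avgRespSec) {
        return threads / avgRespSec;
    }

    // Relative error between the theoretical and reported throughput
    static double relativeError(double theoretical, double reported) {
        return (theoretical - reported) / theoretical;
    }

    public static void main(String[] args) {
        // First example: 100 threads, 593 ms average response, JMeter reports 166.4
        double tps1 = theoreticalTps(100, 0.593);          // ~168.63
        System.out.printf("case 1: %.2f vs 166.4 (error %.1f%%)%n",
                tps1, 100 * relativeError(tps1, 166.4));   // ~1.3% error

        // Third example: 100 threads, 791 ms average response, JMeter reports 92.3
        double tps3 = theoreticalTps(100, 0.791);          // ~126.42
        System.out.printf("case 3: %.2f vs 92.3 (error %.1f%%)%n",
                tps3, 100 * relativeError(tps3, 92.3));    // ~27% error
    }
}
```

The first case is within normal measurement noise; the third is far too large to explain away, which is what prompted the investigation below.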

The investigation revealed that the large error occurs when the test script performs extensive regex matching to validate responses, suggesting that JMeter spends additional time on these local operations, which is then counted in the throughput calculation.

An experiment with a single‑thread script making ten requests showed an average response time of 207 ms and a JMeter‑reported throughput of 4.8, matching the expected 4.83 (1 s / 207 ms).

When a Groovy post‑processor that sleeps for 500 ms per iteration was added (still single‑thread, ten requests), the average response time stayed around 193 ms, yet JMeter reported a throughput of only 1.5. Folding the sleep into the per‑iteration cost (500 ms × 9/10 = 450 ms on average, plus 193 ms) predicts 1000 / 643 ≈ 1.55, confirming that JMeter includes the local processing time in its throughput calculation.
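The single‑thread experiment can be reproduced on paper. A minimal model (figures from the experiment above; the names are illustrative) that treats each loop iteration as server response time plus local post‑processor time:

```java
// Models JMeter's loop timing: each iteration costs the server response
// time plus any local work (here, a Groovy post-processor sleep).
public class LoopThroughput {

    // Throughput over n iterations when the sleep runs on sleepRuns of them
    static double tps(int iterations, double respMs, double sleepMs, int sleepRuns) {
        double totalMs = iterations * respMs + sleepRuns * sleepMs;
        return iterations / (totalMs / 1000.0);
    }

    public static void main(String[] args) {
        // Without the sleep: 10 requests at ~207 ms each
        System.out.printf("no sleep: %.2f/s%n", tps(10, 207, 0, 0));     // ~4.83

        // With a 500 ms sleep running 9 times over 10 iterations,
        // at ~193 ms average response time
        System.out.printf("with sleep: %.2f/s%n", tps(10, 193, 500, 9)); // ~1.55
    }
}
```

The model reproduces both measurements, which is strong evidence that the sleep, not the server, is what drags the reported figure down.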

Thus, JMeter measures throughput based on the total time a thread spends in one loop, including any local processing such as regex extraction, parameter validation, variable assignment, or explicit sleeps. This reduces the reported throughput and consequently the actual pressure applied to the server, potentially turning the data into misleading “fake” metrics.

The article recommends using micro‑benchmarking techniques to correct the results, as described in related posts, and to adjust the figures whenever a large error is observed to avoid treating the data as accurate.
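One way to apply such a correction: micro‑benchmark the local step (e.g. the regex extraction) on its own, then subtract its per‑iteration cost from the per‑loop time implied by JMeter's report. A sketch under that assumption (the 292 ms overhead figure and all names here are illustrative, not taken from the article):

```java
// Corrects JMeter-reported throughput by removing per-iteration local
// overhead measured in a separate micro-benchmark.
public class ThroughputCorrection {

    // Per-loop time implied by the reported throughput: T = threads / tps
    static double perLoopSec(int threads, double reportedTps) {
        return threads / reportedTps;
    }

    // Corrected tps = threads / (per-loop time - local overhead)
    static double correctedTps(int threads, double reportedTps, double overheadSec) {
        return threads / (perLoopSec(threads, reportedTps) - overheadSec);
    }

    public static void main(String[] args) {
        // Third example: 100 threads, reported 92.3/s. Suppose a
        // micro-benchmark puts the regex extraction at ~292 ms/iteration.
        double corrected = correctedTps(100, 92.3, 0.292);
        System.out.printf("corrected: %.1f/s%n", corrected); // ~126.4, the theoretical value
    }
}
```

With a plausible overhead measurement, the corrected figure lands back on the theoretical 126.42, matching the formula tps = threads / AVG(t).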
