Master Java Performance Testing with JMH: From Basics to Advanced Benchmarks
This article explains what benchmarking is, introduces the JMH framework for Java, shows how to add JMH dependencies, walks through a simple "Hello JMH" example with full source and output, demonstrates using JMH with Spring Boot, details the most important options and annotations, and highlights common pitfalls to avoid when writing reliable micro‑benchmarks.
What is Benchmarking?
Benchmarking is a scientific method of measuring the performance of a target (CPU, database, etc.) using well‑designed tests, tools, and environments to obtain quantitative, comparable results that help users choose hardware or software and guide developers in optimisation.
What is JMH?
JMH (Java Microbenchmark Harness) is a framework provided by OpenJDK for writing reliable, nanosecond‑level benchmarks for Java and JVM languages. It supports various time units, modes (throughput, average time, etc.), and produces trustworthy results.
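To give a taste of those modes before diving in, a benchmark method can declare its mode and time unit directly with annotations. This is a minimal sketch; the class and method names are illustrative, not from any particular codebase:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import java.util.concurrent.TimeUnit;

public class ModesExample {

    // Throughput mode: JMH reports how many operations complete per unit of time.
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.SECONDS)
    public int parseThroughput() {
        return Integer.parseInt("12345");
    }

    // AverageTime mode: JMH reports the mean time taken per operation.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public int parseAverageTime() {
        return Integer.parseInt("12345");
    }
}
```

Both methods measure the same work; only the way JMH reports the result differs.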
Getting Started
JMH is distributed as a library rather than as part of the JDK, so add the following Maven dependencies to your project (version 1.32 is used throughout this article; check Maven Central for the latest release):
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.32</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.32</version>
</dependency>

Example "Hello JMH" benchmark:
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
public class Example_01_HelloJMH {

    @Benchmark
    public String sayHello() {
        return "HELLO JMH!";
    }

    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .include(Example_01_HelloJMH.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(options).run();
    }
}

Running the benchmark produces output such as:
# JMH version: 1.32
# VM version: JDK 1.8.0_241, Java HotSpot(TM) 64‑Bit Server VM, 25.241‑b07
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Benchmark mode: Throughput, ops/time
# Benchmark: com.ziroom.test.Example_01_HelloJMH.sayHello
# Result "com.ziroom.test.Example_01_HelloJMH.sayHello": 2.96×10⁹ ± 1.72×10⁸ ops/s [Average]

Using JMH with Spring Boot
Add the required imports and annotate the benchmark class:
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;
import java.util.concurrent.TimeUnit;
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
public class SpringBootBenchMark {

    private ConfigurableApplicationContext springContext;
    private TestController testController;

    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .include(SpringBootBenchMark.class.getSimpleName())
                .warmupIterations(3)
                .measurementIterations(3)
                .forks(3)
                .build();
        new Runner(options).run();
    }

    // Start the Spring context once per trial and look up the bean under test.
    @Setup
    public void setUp() {
        springContext = SpringApplication.run(SpringbootJmhTestApplication.class);
        testController = springContext.getBean(TestController.class);
    }

    // Shut the context down so each fork exits cleanly.
    @TearDown
    public void tearDown() {
        springContext.close();
    }

    @Benchmark
    public void testStringBuffer() {
        testController.testAService();
    }

    @Benchmark
    public void testStringBuilder() {
        testController.testBService();
    }
}

Detailed Options and Annotations
Key builder options include include (select benchmark classes by regex), exclude, timeUnit, forks, warmupIterations, measurementIterations, and more. The most important annotations are @Benchmark, @BenchmarkMode, @State, @Setup, @TearDown, @Param, @Group, @GroupThreads, @OperationsPerInvocation, and @AuxCounters. Each controls how JMH generates, runs, and reports the benchmark.
For example, @OperationsPerInvocation(10) tells JMH that a single method call represents ten logical operations, affecting the reported throughput.
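As a sketch of how @Param and @OperationsPerInvocation combine in practice (the class, field, and method names here are hypothetical):

```java
import org.openjdk.jmh.annotations.*;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class BatchCopyBenchmark {

    // @Param makes JMH run the whole benchmark once per listed value.
    @Param({"16", "256"})
    int size;

    byte[] src;

    @Setup
    public void setUp() {
        src = new byte[size];
    }

    // The method performs the copy ten times per call, so we tell JMH
    // that one invocation equals ten logical operations; the reported
    // ops/s then counts single copies rather than batches of ten.
    @Benchmark
    @OperationsPerInvocation(10)
    public byte[] copyTenTimes() {
        byte[] dst = null;
        for (int i = 0; i < 10; i++) {
            dst = Arrays.copyOf(src, src.length);
        }
        return dst;
    }
}
```

Without the @OperationsPerInvocation annotation, JMH would report the throughput of ten-copy batches, understating the per-copy rate by a factor of ten.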
Common Pitfalls
Dead‑code elimination: the JIT may remove a computation whose result is never used, so return the result or consume it with a Blackhole.
Constant folding and propagation: expressions over compile‑time constants can be precomputed by the JIT; read inputs from non‑final @State fields instead.
Loops: avoid loops inside the benchmark method unless the loop itself is the measured work, because unrolling and pipelining distort per‑operation timings.
Insufficient isolation: run benchmarks in separate forks so that JIT profile pollution from one benchmark cannot skew another.
Method inlining: inlining decisions change what is actually measured; @CompilerControl can pin inlining behaviour when needed.
False sharing and cache‑line effects: unrelated fields that share a cache line cause hidden contention in multithreaded benchmarks.
Branch prediction: highly regular test data can make branches look far cheaper than they would be on production inputs.
Multithreaded testing issues: scheduling and synchronization overhead add variance; control thread counts deliberately with @Threads and @Group.
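To make the first two pitfalls concrete, here is a sketch in the style of the official JMH samples; measureWrong is the deliberately broken variant:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class DeadCodeBenchmark {

    // A non-final state field: reading the input from here
    // prevents the JIT from constant-folding Math.log.
    double x = Math.PI;

    // BROKEN: the result is discarded, so the JIT may eliminate
    // the Math.log call entirely and you measure an empty method.
    @Benchmark
    public void measureWrong() {
        Math.log(x);
    }

    // OK: returning the value keeps the computation alive.
    @Benchmark
    public double measureRight() {
        return Math.log(x);
    }

    // OK: Blackhole.consume also defeats dead-code elimination,
    // and is handy when a method produces several values.
    @Benchmark
    public void measureWithBlackhole(Blackhole bh) {
        bh.consume(Math.log(x));
    }
}
```

Run on a typical JVM, measureWrong reports times close to an empty method, while the other two report the real cost of Math.log.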