JMH – Java Microbenchmark Harness: Introduction, Demo, and Annotation Guide
This article introduces JMH, the official Java microbenchmarking tool, explains why warm‑up is needed, shows how to build a Maven project, provides a complete LinkedList iteration benchmark example, demonstrates common JMH annotations, and outlines how to run and interpret benchmark results.
JMH – Java Microbenchmark Harness
In everyday development we often need to measure the performance of code or tools, and the most straightforward way is to run multiple iterations and record total execution time. However, because the JVM mixes JIT compilation with interpretation and continuously optimizes hot code, it is hard to determine how many repetitions are needed for stable results, so experienced developers usually add a warm‑up phase before measuring.
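To see why the manual approach is fragile, here is a minimal sketch of hand-rolled measurement with an explicit warm-up phase (the class and workload are illustrative, not part of JMH). It addresses JIT warm-up and dead-code elimination by hand, but still ignores forking, run-to-run variance, and statistical reporting, which is exactly what JMH automates:

```java
public class NaiveBenchmark {

    // Trivial stand-in workload, chosen only for illustration
    static long sumTo(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up phase: give the JIT a chance to compile the hot path
        for (int i = 0; i < 10_000; i++) {
            sumTo(1_000);
        }

        // Measurement phase: average over many repetitions
        int reps = 10_000;
        long blackhole = 0;
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            blackhole += sumTo(1_000); // consume the result so the loop is not dead code
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("avg ns/op: " + elapsed / reps + " (sink: " + blackhole + ")");
    }
}
```

Even this careful version cannot tell you whether the JIT finished compiling, whether on-stack replacement distorted the loop, or how stable the numbers are across JVM instances.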
JMH (Java Microbenchmark Harness) is an official OpenJDK framework designed for precise Java micro‑benchmarking, capable of measuring method‑level performance with microsecond accuracy.
Key Points for Java Benchmarking
Warm‑up before the test.
Avoid dead code elimination in benchmark methods.
Support for concurrent testing.
Clear presentation of results.
Typical Use Cases
Quantitatively analyze the optimization effect of a hotspot function.
Determine how long a function runs and how its execution time relates to input variables.
Compare multiple implementations of the same function.
The following sections demonstrate a complete JMH demo and explain the most common annotations.
Demo Demonstration
First, we build a JMH test project using Maven. From the command line, the JMH archetype creates a new project:
$ mvn archetype:generate \
    -DinteractiveMode=false \
    -DarchetypeGroupId=org.openjdk.jmh \
    -DarchetypeArtifactId=jmh-java-benchmark-archetype \
    -DgroupId=org.sample \
    -DartifactId=test \
    -Dversion=1.0

Alternatively, add the following dependencies to an existing Maven project:
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>${jmh.version}</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>${jmh.version}</version>
    <scope>provided</scope>
</dependency>

We then write a benchmark that compares iterating a LinkedList via index access versus the for-each loop:
@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@Threads(Threads.MAX)
public class LinkedListIterationBenchMark {

    private static final int SIZE = 10000;

    private List<String> list = new LinkedList<>();

    @Setup
    public void setUp() {
        for (int i = 0; i < SIZE; i++) {
            list.add(String.valueOf(i));
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void forIndexIterate() {
        for (int i = 0; i < list.size(); i++) {
            list.get(i);
            // Side effect to keep the loop from being eliminated as dead code
            System.out.print("");
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void forEachIterate() {
        for (String s : list) {
            System.out.print("");
        }
    }
}

Running the benchmark can be done either by building a JAR and executing it:
$ mvn clean install
$ java -jar target/benchmarks.jar

or directly from an IDE with a custom main method:
public static void main(String[] args) throws RunnerException {
    Options opt = new OptionsBuilder()
            .include(LinkedListIterationBenchMark.class.getSimpleName())
            .forks(1)
            .warmupIterations(2)
            .measurementIterations(2)
            .output("E:/Benchmark.log")
            .build();
    new Runner(opt).run();
}

The output shows the throughput of each method, e.g., ~1192 ops/s for forEachIterate versus ~207 ops/s for forIndexIterate. The gap is expected: LinkedList.get(i) traverses from the head on every call, so index iteration does O(n²) work overall, while the for-each loop walks the list once via its iterator.
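The System.out.print("") calls in the benchmark above are a crude way to defeat dead-code elimination, and the I/O itself adds overhead to every iteration. JMH's own Blackhole is the idiomatic tool for this. A variant of the for-each benchmark using it might look like the following sketch (the class name is illustrative; it assumes the same JMH dependencies as the project above):

```java
import java.util.LinkedList;
import java.util.List;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Benchmark)
public class LinkedListBlackholeBenchmark {

    private List<String> list = new LinkedList<>();

    @Setup
    public void setUp() {
        for (int i = 0; i < 10000; i++) {
            list.add(String.valueOf(i));
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void forEachIterate(Blackhole bh) {
        for (String s : list) {
            // Sinking each element into the Blackhole prevents the JIT
            // from proving the loop has no effect and removing it
            bh.consume(s);
        }
    }
}
```

JMH injects the Blackhole parameter automatically; consuming values through it is much cheaper and more predictable than printing, so the measured numbers reflect the iteration itself rather than I/O.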
Annotation Overview
JMH provides several annotations to control benchmark behavior:
@BenchmarkMode – specifies the measurement mode (Throughput, AverageTime, SampleTime, SingleShotTime, All).
@Warmup – defines warm‑up iterations, e.g., @Warmup(iterations = 3).
@Measurement – sets the number of measurement iterations and their duration.
@Threads – determines how many threads run the benchmark.
@Fork – number of separate JVM forks for the test.
@OutputTimeUnit – unit for reporting results (seconds, milliseconds, etc.).
@Benchmark – marks a method as a benchmark target.
@Param – defines a set of parameter values for a field.
@Setup and @TearDown – run setup and teardown code around the benchmark; a Level argument (Trial, Iteration, Invocation) controls whether they run once per benchmark run, per iteration, or per method invocation.
@State – declares a class that holds shared state, with scopes Thread, Group, or Benchmark.
These annotations can be placed on methods or classes to fine‑tune the benchmarking process.
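Putting several of these together, a benchmark parameterized over input size could be configured as in the following sketch (the class name and workload are illustrative, not taken from the demo above):

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 3)        // 3 warm-up iterations before measuring
@Measurement(iterations = 5)   // 5 measured iterations
@Fork(1)                       // a single forked JVM
public class StringConcatBenchmark {

    // JMH runs the whole benchmark once per listed value
    @Param({"10", "100", "1000"})
    private int size;

    @Benchmark
    public String concatWithBuilder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < size; i++) {
            sb.append(i);
        }
        // Returning the result lets JMH consume it, preventing dead-code elimination
        return sb.toString();
    }
}
```

The report then shows one result row per @Param value, making it easy to see how execution time scales with input size.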
Conclusion
JMH can be used to benchmark a wide range of Java libraries and frameworks, such as logging libraries or bean‑copy utilities. For more examples, refer to the official JMH samples and related articles on common testing pitfalls.
References
https://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/
https://www.cnkirito.moe/java-jmh/
http://www.hollischuang.com/archives/1072
https://yq.aliyun.com/articles/341539?utm_content=m_39911
https://openjdk.java.net/projects/code-tools/jmh/