Mastering JMH: Essential Java Microbenchmark Techniques for Accurate Performance Testing
JMH is a Java harness that enables precise, reproducible microbenchmarking through annotations, state management, threading, and profiling, and this guide walks through its core features—benchmark modes, state scopes, setup/teardown, fork control, blackholes, and advanced options—illustrated with sample code and results.
Introduction
Benchmarking means designing scientific test methods, tools, and systems to quantitatively and comparably measure the performance of a class of test objects. JMH (Java Microbenchmark Harness) is a tool for building, running, and analysing nano/micro/milli/macro level benchmarks written in Java and other languages targeting the JVM.
Why JMH?
Some may wonder: why not just record timestamps around the code under test? Naive timing is distorted by JIT warm-up, dead-code elimination, constant folding, and scheduling noise, all of which JMH is designed to control. This guide walks through the common JMH features using the official samples, answering that question along the way.
Official Sample Walkthrough
(1) JMHSample 01 HelloWorld
The first example shows the minimal setup: add the JMH dependency and place the benchmark under the test directory.
Mark methods with @Benchmark and add a main method to launch the run. You can run it directly in the IDE or package it as a JAR for server execution. The console output includes the environment and configuration, per-run results, and a summary with min/avg/max.
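A minimal sketch of such a benchmark (class and method names here are illustrative, assuming jmh-core and the annotation processor are on the classpath):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class HelloWorldBench {

    // JMH generates the measurement loop around this method
    @Benchmark
    public void wellHelloThere() {
        // empty body: measures the bare overhead of the JMH infrastructure
    }

    // launch from the IDE; alternatively, package a JAR and run it on a server
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(HelloWorldBench.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}
```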
(2) JMHSample 02 BenchmarkModes
Introduces @OutputTimeUnit and @BenchmarkMode.
@OutputTimeUnit sets the time unit used in the report (down to nanoseconds). @BenchmarkMode selects the measurement mode:
Mode.Throughput – operations completed per unit of time.
Mode.AverageTime – average time per operation.
Mode.SampleTime – samples individual operation times, reporting percentiles.
Mode.SingleShotTime – times a single invocation, useful for cold-start cost.
Modes can be combined on one method, or Mode.All runs them all.
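A sketch combining both annotations (the sleep bodies are arbitrary placeholders for real work):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class ModesBench {

    // Throughput: how many calls complete per second
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.SECONDS)
    public void measureThroughput() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(10);
    }

    // Two modes combined on one method, reported in microseconds
    @Benchmark
    @BenchmarkMode({Mode.AverageTime, Mode.SampleTime})
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public void measureSeveralModes() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(10);
    }
}
```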
(3) JMHSample 03 States
Shows @State usage for multithreaded tests.
@State(Scope.Thread) – thread-local: each benchmark thread gets its own instance. @State(Scope.Benchmark) – shared by all threads in the benchmark. @State(Scope.Group) – shared within a thread group.
JMH instantiates and injects these state objects into benchmark methods, much like dependency injection in Spring.
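A sketch of the two most common scopes (class and field names are illustrative):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class StatesBench {

    @State(Scope.Benchmark)
    public static class SharedState {
        volatile double x = Math.PI; // one instance shared by all benchmark threads
    }

    @State(Scope.Thread)
    public static class ThreadState {
        volatile double x = Math.PI; // one instance per benchmark thread
    }

    // JMH constructs and injects the state object automatically
    @Benchmark
    public void measureShared(SharedState state) {
        state.x++; // every thread contends on the same field
    }

    @Benchmark
    public void measureUnshared(ThreadState state) {
        state.x++; // each thread mutates its private copy
    }
}
```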
(4) JMHSample 04 DefaultState
Demonstrates placing @State on the benchmark class itself, affecting all fields.
(5) JMHSample 05 StateFixtures
Introduces @Setup and @TearDown for initialization and cleanup.
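A sketch following the sample's pattern (names illustrative): the fixture is populated before measurement and sanity-checked afterwards.

```java
import java.util.ArrayList;
import java.util.List;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Thread)
public class FixturesBench {
    private List<Integer> list;

    @Setup // runs before measurement: prepare the input data
    public void prepare() {
        list = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            list.add(i);
        }
    }

    @TearDown // runs after measurement: release or verify state
    public void check() {
        assert list.size() == 1000 : "benchmark must not corrupt the fixture";
        list = null;
    }

    @Benchmark
    public int measureSum() {
        int sum = 0;
        for (int v : list) {
            sum += v;
        }
        return sum;
    }
}
```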
(6) JMHSample 06 FixtureLevel
@Setup and @TearDown accept a Level parameter defining their granularity:
Level.Trial – runs once per benchmark run (the default).
Level.Iteration – runs before/after each iteration.
Level.Invocation – runs before/after every method call (use with care: its bookkeeping overhead can dominate short benchmarks).
(7) JMHSample 07 FixtureLevelInvocation
Shows using Level.Invocation to sleep after each method call, simulating extra latency.
(8) JMHSample 08 DeadCode
Explains dead-code elimination (DCE). The JIT compiler may remove code whose result is never used, producing misleadingly fast benchmark results. Returning the computed value prevents the removal.
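A sketch of the pitfall, patterned on the sample (names illustrative):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class DeadCodeBench {
    private double x = Math.PI;

    @Benchmark
    public void baseline() {
        // measures loop and infrastructure overhead only
    }

    @Benchmark
    public void measureWrong() {
        Math.log(x); // result is never used, so the JIT may remove the call:
                     // this can score the same as baseline()
    }

    @Benchmark
    public double measureRight() {
        return Math.log(x); // returned value is consumed by JMH and survives
    }
}
```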
(9) JMHSample 09 Blackholes
Uses Blackhole to prevent DCE by consuming results.
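When a method produces several values, returning only one would leave the rest as dead code; a Blackhole parameter, injected by JMH like a @State object, consumes them all. A sketch:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class BlackholeBench {
    private double x1 = Math.PI;
    private double x2 = Math.PI * 2;

    @Benchmark
    public void measureRight(Blackhole bh) {
        bh.consume(Math.log(x1)); // explicitly consumed: cannot be eliminated
        bh.consume(Math.log(x2));
    }
}
```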
(10) JMHSample 10 ConstantFold
Shows constant folding: the JIT can precompute expressions whose inputs are compile-time constants, so the benchmark measures nothing. Final fields are also subject to folding.
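A sketch of the trap, patterned on the sample (names illustrative):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class ConstantFoldBench {
    private final double wrongX = Math.PI; // final field: may be folded too
    private double x = Math.PI;            // non-final: re-read on every call

    @Benchmark
    public double measureWrong() {
        return Math.log(Math.PI); // constant expression: computed once, not per call
    }

    @Benchmark
    public double measureWrongFinal() {
        return Math.log(wrongX); // final state fields can also be constant-folded
    }

    @Benchmark
    public double measureRight() {
        return Math.log(x); // reads mutable state, so folding is prevented
    }
}
```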
(11) JMHSample 11 Loops
Avoid writing manual loops inside benchmark methods: the JIT can unroll and optimize the loop, so the measured per-iteration cost no longer reflects the cost of a single call.
(12) JMHSample 12 Forking
@Fork controls how many separate JVM processes the benchmark runs in; it is typically set to 1 so each benchmark runs in a fresh JVM, isolated from profile pollution caused by other tests.
(13) JMHSample 13 RunToRun
Running multiple forks exposes run-to-run variance caused by JVM nondeterminism (JIT compilation order, memory layout, and so on) and lets JMH average it out.
(15) JMHSample 15 Asymmetric
Demonstrates @Group and @GroupThreads for unbalanced thread workloads (e.g., three writers, one reader) using @State(Scope.Group).
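A sketch of a three-writer / one-reader group (the counter and names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Group;
import org.openjdk.jmh.annotations.GroupThreads;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Group) // one counter instance per thread group
public class AsymmetricBench {
    private final AtomicInteger counter = new AtomicInteger();

    @Benchmark
    @Group("rw")
    @GroupThreads(3) // three threads of the group execute the writer
    public int writer() {
        return counter.incrementAndGet();
    }

    @Benchmark
    @Group("rw")
    @GroupThreads(1) // one thread executes the reader concurrently
    public int reader() {
        return counter.get();
    }
}
```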
(16) JMHSample 16 CompilerControl
Controls JVM method inlining with @CompilerControl:
DONT_INLINE – disable inlining.
INLINE – force inlining.
EXCLUDE – exclude from compilation.
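A sketch of the two inlining hints (names illustrative):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.CompilerControl;

public class CompilerControlBench {
    private int x;

    // forbid the JIT from inlining this call into the benchmark loop
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public void targetNoInline() {
        x++;
    }

    // ask the JIT to inline it wherever possible
    @CompilerControl(CompilerControl.Mode.INLINE)
    public void targetInline() {
        x++;
    }

    @Benchmark
    public int measure() {
        targetNoInline();
        targetInline();
        return x;
    }
}
```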
(17) JMHSample 17 SyncIterations
Synchronises thread pool warm‑up and measurement to obtain accurate multithreaded results. Parameters like warmupTime, measurementTime, threads, forks, and syncIterations are illustrated.
(18) JMHSample 18 Control
Shows using Control to avoid deadlocks when testing asymmetric atomic operations.
(20) JMHSample 20 Annotations
All Options can be expressed as annotations on benchmark methods, simplifying configuration.
(21) JMHSample 21 ConsumeCPU
Blackhole.consumeCPU(tokens) burns CPU cycles; the number of tokens is roughly proportional to the time consumed.
(22) JMHSample 22 FalseSharing
Discusses false sharing and ways to mitigate it beyond JMH's automatic cache‑line padding.
(23) JMHSample 23 AuxCounters
@AuxCounters provides auxiliary counters on @State objects, supporting EVENTS and OPERATIONS modes.
(24) JMHSample 24 Inheritance
Benchmarks defined in a superclass are inherited by subclasses, allowing composition at compile time.
(25) JMHSample 25 API_GA
Shows a programmatic API approach to writing benchmarks, which is more complex than annotation‑based usage.
(26) JMHSample 26 BatchSize
batchSize specifies how many times a benchmark method is invoked per measurement iteration.
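A sketch following the sample's idea of amortizing a cost that grows with repeated calls, here list insertion (names and sizes illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Thread)
public class BatchBench {
    private final List<String> list = new ArrayList<>();

    @Setup(Level.Iteration)
    public void clear() {
        list.clear(); // start every batch from an empty list
    }

    // each iteration invokes the method 5000 times as one measured batch
    @Benchmark
    @BenchmarkMode(Mode.SingleShotTime)
    @Warmup(iterations = 5, batchSize = 5000)
    @Measurement(iterations = 5, batchSize = 5000)
    public List<String> measure() {
        list.add(0, "element"); // cost depends on the current list size
        return list;
    }
}
```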
(27) JMHSample 27 Params
@Param enables running the same benchmark across multiple input values.
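A sketch (the values and the primality check are illustrative):

```java
import java.math.BigInteger;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class ParamsBench {

    // JMH runs the full benchmark once for each listed value
    @Param({"31", "65", "101", "103"})
    public int arg;

    @Benchmark
    public boolean isPrime() {
        return BigInteger.valueOf(arg).isProbablePrime(50);
    }
}
```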
(28) JMHSample 28 BlackholeHelpers
Shows that Blackhole can be used in @Setup, @TearDown, and other helper methods.
(29) JMHSample 29 StatesDAG
Demonstrates nested @State definitions, an experimental feature.
(30) JMHSample 30 Interrupts
Uses JMH’s interrupt handling to break deadlocks during blocking operations like queue put / take.
(31) JMHSample 31 InfraParams
Shows three injectable infrastructure objects: BenchmarkParams, IterationParams, and ThreadParams, which expose the current run's configuration to the benchmark.
(32) JMHSample 32 BulkWarmup
Three warm‑up modes:
WarmupMode.INDI – individual benchmark warm‑up.
WarmupMode.BULK – all benchmarks warmed before each run.
WarmupMode.BULK_INDI – bulk warm‑up plus individual warm‑up.
(33) JMHSample 33 SecurityManager
Shows injecting a custom SecurityManager, though practical use is limited.
(34) JMHSample 34 SafeLooping
Guidelines for constructing safe loops to avoid measurement bias.
(35) JMHSample 35 Profilers
Built‑in profilers for deeper analysis: ClassloaderProfiler, CompilerProfiler, GCProfiler, StackProfiler, PausesProfiler, HotspotThreadProfiler, HotspotRuntimeProfiler, HotspotMemoryProfiler, HotspotCompilationProfiler, HotspotClassloadingProfiler.
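Profilers are attached when launching the run; a sketch assuming a benchmark class named MyBench exists elsewhere:

```java
import org.openjdk.jmh.profile.GCProfiler;
import org.openjdk.jmh.profile.StackProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ProfilerLauncher {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include("MyBench")               // hypothetical benchmark class
                .addProfiler(GCProfiler.class)    // allocation rate and GC counts per op
                .addProfiler(StackProfiler.class) // sampled thread stack distribution
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}
```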
Conclusion
JMH offers convenience and accuracy for Java microbenchmarking: simple annotations, comprehensive measurement dimensions, and built-in tooling that guards against common pitfalls such as warm-up bias, run-to-run JVM variance (mitigated by forking), unwanted method inlining, and constant folding.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Xiao Lou's Tech Notes
Backend technology sharing, architecture design, performance optimization, source code reading, troubleshooting, and pitfall practices
