Why Your Java App’s CPU Spikes: Mastering C1/C2 JIT Threads
This article explains how HotSpot's C1 and C2 JIT compiler threads work, why they can consume excessive CPU, and provides practical JVM tuning options—including tiered compilation, code‑cache sizing, and compiler‑thread adjustments—to mitigate performance issues.
HotSpot JIT Overview
HotSpot JIT (Just‑In‑Time) is the default compiler in Oracle JDK and OpenJDK that transforms Java bytecode into native machine code at runtime. It contains two compilers: C1 (Client) for fast startup and modest optimization, and C2 (Server) for aggressive, high‑performance optimization.
Interpretation: The JVM initially interprets bytecode so the program can start immediately.
Just‑In‑Time Compilation: Frequently executed (hot) code is identified and compiled into optimized native code.
Execution of Native Code: The compiled native code is then executed directly, bypassing further interpretation.
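This warm-up activity can be observed from inside a running application through the standard `CompilationMXBean`; a minimal sketch (the hot loop and class name are illustrative):

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitProbe {
    public static void main(String[] args) {
        // Burn some CPU so the JIT has something to compile.
        long acc = 0;
        for (int i = 0; i < 5_000_000; i++) {
            acc += Integer.bitCount(i);
        }
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        System.out.println("JIT compiler: " + jit.getName());
        if (jit.isCompilationTimeMonitoringSupported()) {
            // Cumulative time spent by compiler threads, in milliseconds.
            System.out.println("Total compilation time: "
                    + jit.getTotalCompilationTime() + " ms");
        }
        System.out.println("acc=" + acc); // keep the loop from being eliminated
    }
}
```

Watching `getTotalCompilationTime()` over time is a lightweight way to confirm that a CPU spike coincides with a burst of JIT activity.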
Code Cache
The code cache is a dedicated JVM region that stores the native code produced by the JIT. Keeping compiled code in the cache avoids repeated compilation, reducing overhead and improving overall performance.
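The code cache is exposed as one or more non-heap memory pools: older or non-segmented JVMs report a single "CodeCache" pool, while recent HotSpot versions segment it into several "CodeHeap" pools. A sketch that lists whatever pools the running JVM exposes:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class CodeCachePools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            // Matches "CodeCache" (non-segmented) and the segmented
            // "CodeHeap 'profiled nmethods'" style pool names.
            if (name.contains("CodeCache") || name.contains("CodeHeap")) {
                MemoryUsage u = pool.getUsage();
                long max = u.getMax();
                System.out.printf("%s: used=%d KB, max=%s%n",
                        name, u.getUsed() / 1024,
                        max < 0 ? "n/a" : (max / 1024) + " KB");
            }
        }
    }
}
```

If used size approaches the maximum, the JVM may stop compiling or start flushing compiled methods, both of which show up as extra CPU work.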
C1 vs C2 Compilers
C1 generates code quickly with modest optimizations, suitable for short‑lived or startup‑heavy applications. C2 performs deeper optimizations, taking more time but producing faster code for long‑running, performance‑critical workloads. Modern JDKs ship both compilers and use tiered compilation: in HotSpot's numbering, level 0 is the interpreter, levels 1–3 are C1 (with increasing amounts of profiling), and level 4 is C2. Hot methods are first compiled by C1 and, as profiling data accumulates, are recompiled by C2.
Default Compiler Thread Counts
CPU 1: C1 threads = 1, C2 threads = 1
CPU 2: C1 threads = 1, C2 threads = 1
CPU 4: C1 threads = 1, C2 threads = 2
CPU 8: C1 threads = 1, C2 threads = 2
CPU 16: C1 threads = 2, C2 threads = 6
CPU 32: C1 threads = 3, C2 threads = 7
CPU 64: C1 threads = 4, C2 threads = 8
CPU 128: C1 threads = 4, C2 threads = 10
Mitigating High CPU Usage by C1/C2 Threads
Do Nothing
If the CPU spikes are intermittent and do not noticeably affect application performance, you may temporarily ignore them, as they can be caused by normal JIT warm‑up or occasional recompilation.
Disable Tiered Compilation
Passing -XX:-TieredCompilation disables the tiered pipeline: hot methods go straight from the interpreter to C2, with no intermediate C1 compilations. This reduces the total number of compilations and can lower compilation CPU, but warm‑up is slower and overall performance will likely degrade.
Limit Tiered Level
Setting -XX:TieredStopAtLevel=3 caps compilation at tier 3 (C1 with full profiling), so the C2 compiler (tier 4) never runs. This eliminates the most expensive compilations and reduces CPU consumption at the cost of peak performance; thorough testing is recommended.
Print Compilation Information
Enabling -XX:+PrintCompilation makes the JVM output detailed compilation events, showing which methods are compiled, when, and by which compiler (C1 or C2). This data helps pinpoint hot spots for targeted tuning, but the output can be verbose and may affect performance in production.
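A typical invocation looks like the following (`app.jar` is a placeholder for your application):

```shell
# Log every JIT compilation event to stdout; with tiered compilation,
# the tier column shows 1-3 for C1-compiled methods and 4 for C2.
java -XX:+PrintCompilation -jar app.jar
```

If the log shows the same methods being compiled and deoptimized repeatedly, that churn is a likely source of sustained compiler-thread CPU.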
Adjust Code Cache Size
The default code‑cache size is 240 MB on 64‑bit JVMs with tiered compilation. You can increase it with -XX:ReservedCodeCacheSize=512m (or another value). A larger cache allows more compiled code to be retained, potentially reducing recompilation and CPU load, but it reserves more native (off‑heap) memory — the code cache is not part of the Java heap.
Change Compiler Thread Count
Use -XX:CICompilerCount to control the total number of JIT compiler threads; with tiered compilation enabled, HotSpot splits them between C1 and C2. For example, -XX:CICompilerCount=8 allots eight compiler threads in total. More threads can improve compilation throughput on multi‑core systems, yet excessive threads compete with application threads for CPU and may degrade performance; always validate changes in a test environment.
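Taken together, a tuned launch might look like the following sketch (`app.jar` and the specific values are illustrative placeholders, not recommendations):

```shell
# Larger code cache plus an explicit compiler-thread count;
# values here are illustrative and must be validated under load.
java -XX:ReservedCodeCacheSize=512m \
     -XX:CICompilerCount=4 \
     -jar app.jar
```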