
Why JVM Gets OOMKilled in Kubernetes: Exit Code 137 and Memory Limits Explained

This article explains why Java applications running in Kubernetes containers are often terminated with OOMKilled and Exit Code 137, analyzes how the JVM perceives container memory limits, and provides practical flag configurations and code examples to prevent out‑of‑memory crashes.

MaGe Linux Operations

In day-to-day work we deploy most applications on Kubernetes, yet pods are still terminated with OOMKilled even when the JVM heap is configured smaller than both the Docker container memory and the Kubernetes memory limit. Below is an explanation of the OOMKilled exit code in Kubernetes and how to configure the JVM to avoid it.

Exit Code 137

Exit code 137 indicates the container received a SIGKILL signal (the equivalent of kill -9), which can be sent by a user, the Docker daemon, or the kernel itself.

Code 137 is common in Kubernetes: when a pod's memory limit is set too low and the process exceeds it, the kernel OOM killer terminates the process, the pod status shows "OOMKilled": true, and the kill appears in the kernel log (dmesg -T). Notably, this can happen even when the configured heap size is smaller than both the Docker container and pod memory limits.

Root Cause Analysis

Before JDK 8u131 (and early JDK 9 builds), a JVM running inside a container sized its memory areas (heap, direct memory, thread stacks) from the host node's total memory rather than from the container's cgroup limits.

Example on a machine:

$ docker run -m 100MB openjdk:8u121 java -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 444.50M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

The container is limited to 100 MB, yet the JVM computes a maximum heap of about 444 MB (roughly 1/4 of the host's memory). The heap can therefore grow past the container limit, at which point the kernel kills the JVM.
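You can confirm from inside the application which ceiling the running JVM actually picked. This is a minimal sketch using only the standard Runtime API; the class name is illustrative:

```java
// Prints the maximum heap the JVM will attempt to use.
// In a container-unaware JVM this reflects the host's memory,
// not the container's cgroup limit.
public class MaxHeapCheck {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory(); // bytes; mirrors -Xmx or ergonomics
        System.out.printf("Max heap: %.2f MB%n", maxHeap / (1024.0 * 1024.0));
    }
}
```

Running this in the 100 MB container above would print a value in the hundreds of megabytes, matching the -XshowSettings output.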

Solution

JVM Awareness of cgroup Limits

Enable the JVM to detect the Docker cgroup memory limit and size the heap accordingly. On JDK 8u131+ and JDK 9, set the flags

-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

to make the JVM respect container memory.

Note: if you also set -Xms and -Xmx, the explicit -Xmx value overrides -XX:+UseCGroupMemoryLimitForHeap.

Summary:

The flag -XX:+UseCGroupMemoryLimitForHeap lets the JVM derive its maximum heap from the container's memory limit, whereas -Xmx pins a fixed maximum heap size.

Besides heap, non‑heap memory also consumes container memory.
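The split between heap and non-heap consumption can be observed with the standard MemoryMXBean. A small sketch (class name illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Shows that the JVM's footprint is more than just the heap:
// metaspace, the code cache, and other non-heap pools also count
// against the container's memory limit.
public class HeapVsNonHeap {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.printf("heap used:     %d KB%n", heap.getUsed() / 1024);
        System.out.printf("non-heap used: %d KB%n", nonHeap.getUsed() / 1024);
    }
}
```

Both numbers are non-zero even in a trivial program, which is why the container limit must leave headroom beyond -Xmx.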

Trying the Container-Aware Mechanism with JDK 8u131

$ docker run -m 100MB openjdk:8u131 java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 44.50M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

After enabling memory awareness, the JVM detects the 100 MB container and sets the max heap to 44 MB. Increasing the container limit to 1 GB:

$ docker run -m 1GB openjdk:8u131 java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XshowSettings:vm -version
VM settings:
    Max. Heap Size (Estimated): 228.00M
    Ergonomics Machine Class: server
    Using VM: OpenJDK 64-Bit Server VM

With -XX:MaxRAMFraction=1 almost all available memory can be used for the heap, reaching 910.5 MB in a 1 GB container.

Using the entire container memory for the heap is risky, since non-heap memory needs room too; -XX:MaxRAMFraction=2 limits the heap to 50% of container memory, or set -Xmx explicitly instead.

Container‑Internal cgroup Resource Detection

Since Docker 1.7, cgroup information is mounted inside the container (e.g., /sys/fs/cgroup/memory/memory.limit_in_bytes), allowing applications to read memory and CPU limits and set appropriate JVM flags such as -Xmx and -XX:ParallelGCThreads.
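An application can read these mounted files itself. Below is a minimal sketch; the cgroup v1 path comes from the text above, while the cgroup v2 path (/sys/fs/cgroup/memory.max) is an assumption for newer hosts, and the code falls back gracefully when neither file is readable:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Reads the container memory limit directly from the mounted cgroup files.
public class CgroupMemoryLimit {
    static final String[] CANDIDATES = {
        "/sys/fs/cgroup/memory/memory.limit_in_bytes", // cgroup v1
        "/sys/fs/cgroup/memory.max"                    // cgroup v2 (assumed path)
    };

    public static long readLimit() {
        for (String p : CANDIDATES) {
            Path path = Paths.get(p);
            if (Files.isReadable(path)) {
                try {
                    String raw = new String(Files.readAllBytes(path)).trim();
                    if (!"max".equals(raw)) {   // cgroup v2 reports "max" when unlimited
                        return Long.parseLong(raw);
                    }
                } catch (Exception ignored) {
                    // fall through to the next candidate
                }
            }
        }
        return -1; // no readable cgroup limit file found
    }

    public static void main(String[] args) {
        System.out.println("cgroup memory limit (bytes): " + readLimit());
    }
}
```

A startup script can feed this value into -Xmx or -XX:ParallelGCThreads before launching the real application.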

Improvements in Java 10+

Java 10 introduced full container support: the JVM automatically detects container limits without extra flags and, by default, uses 1/4 of the container memory for the heap.

New flags like -XX:MaxRAMPercentage (e.g., -XX:MaxRAMPercentage=75) provide finer control. UseContainerSupport is enabled by default; -XX:+UseCGroupMemoryLimitForHeap is deprecated. Use -XX:InitialRAMPercentage, -XX:MaxRAMPercentage, and -XX:MinRAMPercentage for precise tuning.

Even when running Java in a container, native memory and any companion processes still need headroom, so -XX:MaxRAMPercentage should not be set too high; leaving roughly 25-30% of the container for non-heap memory is a sensible starting point.
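The percentage flags translate directly into a heap ceiling. A back-of-the-envelope sketch of the arithmetic (the 75% figure matches the example above; the class is illustrative, not a JVM API):

```java
// Computes the maximum heap the JVM would pick for a given container
// limit and -XX:MaxRAMPercentage value. Pure arithmetic, no JVM flags.
public class HeapSizing {
    public static long maxHeapBytes(long containerBytes, double maxRamPercentage) {
        return (long) (containerBytes * maxRamPercentage / 100.0);
    }

    public static void main(String[] args) {
        long oneGiB = 1L << 30;
        // -XX:MaxRAMPercentage=75 in a 1 GiB container -> 768 MiB heap
        System.out.println(maxHeapBytes(oneGiB, 75.0) / (1024 * 1024) + " MiB");
    }
}
```

The same formula with the Java 10+ default (25%) yields the 1/4-of-container heap mentioned above.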

Beyond heap, off‑heap memory (Direct buffer memory) also contributes to container memory usage. The following diagram illustrates the memory layout:

[Figure: JVM memory layout diagram]

JVM Parameter -XX:MaxDirectMemorySize

This flag limits the space that DirectByteBuffer can allocate. If not set, the default equals the value of -Xmx (minus survivor space in older versions).

-XX:MaxDirectMemorySize defines the maximum total size of NIO direct-buffer allocations; the value accepts the suffixes k/K, m/M, and g/G. If omitted, the JVM derives a default limit, explained below.
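A quick way to see the flag in action: run a small allocator with a low cap, e.g. java -XX:MaxDirectMemorySize=16m DirectAlloc 32, and the allocation fails with OutOfMemoryError ("Direct buffer memory"). The class name and sizes here are illustrative:

```java
import java.nio.ByteBuffer;

// Allocates direct (off-heap) memory in 1 MB chunks until the requested
// total is reached or the -XX:MaxDirectMemorySize cap is hit.
public class DirectAlloc {
    public static int allocate(int megabytes) {
        int done = 0;
        try {
            ByteBuffer[] hold = new ByteBuffer[megabytes]; // keep buffers reachable
            for (; done < megabytes; done++) {
                hold[done] = ByteBuffer.allocateDirect(1024 * 1024);
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Hit direct memory cap after " + done + " MB: " + e.getMessage());
        }
        return done;
    }

    public static void main(String[] args) {
        int requested = args.length > 0 ? Integer.parseInt(args[0]) : 4;
        System.out.println("Allocated " + allocate(requested) + " MB of direct memory");
    }
}
```

None of this memory appears in the heap, yet all of it counts against the container limit.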

Default Value of -XX:MaxDirectMemorySize

In sun.misc.VM, the default is Runtime.getRuntime().maxMemory(), which reflects the -Xmx setting. The HotSpot source jvm.cpp converts the flag to the system property sun.nio.MaxDirectMemorySize. If the flag is not supplied, the property is left empty and later set to Runtime.getRuntime().maxMemory().

if (!FLAG_IS_DEFAULT(MaxDirectMemorySize)) {
    // convert flag to property
    ...
}

The VM method maxDirectMemory() returns the value stored in the static field directMemory. During initialization, if the property sun.nio.MaxDirectMemorySize is null, empty, or "-1", the JVM assigns Runtime.getRuntime().maxMemory() to directMemory.

if (s == null || s.isEmpty() || s.equals("-1")) {
    directMemory = Runtime.getRuntime().maxMemory();
} else {
    long l = Long.parseLong(s);
    if (l > -1) directMemory = l;
}

Thus, when -XX:MaxDirectMemorySize is not explicitly set, its effective limit equals -Xmx minus a survivor space.

Conclusion

If -XX:MaxDirectMemorySize is not configured, the NIO direct memory limit is effectively -Xmx minus a survivor space. For example, with -Xmx5g and no explicit MaxDirectMemorySize, the default limit becomes roughly 5 GB – survivor, and total heap + direct memory can grow to about 10 GB.

Other APIs to Obtain maxDirectMemory

Use the standard BufferPoolMXBean, or the internal JavaNioAccess.BufferPool (via SharedSecrets), to query current direct memory usage. Note that SharedSecrets lives in an internal package whose location changed over time (sun.misc in JDK 8, jdk.internal.misc in Java 9-11, jdk.internal.access from Java 12), so newer JDKs require an option such as --add-exports java.base/jdk.internal.access=ALL-UNNAMED to access it.

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import sun.misc.JavaNioAccess;   // internal API; package varies by JDK version
import sun.misc.SharedSecrets;

// Standard API: the "direct" pool exposes count, memoryUsed, totalCapacity.
public BufferPoolMXBean getDirectBufferPoolMBean() {
    return ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)
        .stream()
        .filter(e -> e.getName().equals("direct"))
        .findFirst()
        .orElseThrow(IllegalStateException::new);
}

// Internal API (JDK 8 package shown): the same pool without MXBean overhead.
public JavaNioAccess.BufferPool getNioBufferPool() {
    return SharedSecrets.getJavaNioAccess().getDirectBufferPool();
}

Memory Analysis Issues

When -XX:+DisableExplicitGC is set, calls to System.gc() become no-ops. Young-generation GC can still reclaim unreachable DirectByteBuffer objects and free their off-heap memory, but buffers promoted to the old generation are only reclaimed by a full GC; if a full GC never runs, their off-heap memory accumulates and can exhaust physical memory.
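To watch for this accumulation at runtime, poll the "direct" buffer pool. A sketch using only the standard BufferPoolMXBean (the class name is illustrative); a fresh allocation is visible in the pool immediately:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

// Direct allocations register immediately in the "direct" buffer pool,
// making it the metric to alert on when -XX:+DisableExplicitGC prevents
// System.gc() from freeing old-generation buffers.
public class DirectPoolWatch {
    public static BufferPoolMXBean directPool() {
        return ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class).stream()
            .filter(p -> p.getName().equals("direct"))
            .findFirst()
            .orElseThrow(IllegalStateException::new);
    }

    public static void main(String[] args) {
        BufferPoolMXBean pool = directPool();
        long before = pool.getMemoryUsed();
        ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20); // 1 MB, kept reachable
        long after = pool.getMemoryUsed();
        System.out.printf("direct pool grew by %d bytes (buffer cap %d)%n",
                          after - before, buf.capacity());
    }
}
```

Exporting pool.getMemoryUsed() to a metrics system gives early warning before the container hits its limit.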

[Figure: memory analysis illustration]
Written by MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high-end IT training brand offering courses in Linux cloud operations, Python full-stack development, automation, data analysis, AI, and Go high-concurrency architecture.