
Managing Java Process Memory in Kubernetes Pods to Prevent OOMKilled

This article explains why Java processes in Kubernetes pods often encounter OOMKilled despite correct JVM heap settings, analyzes the discrepancy between JVM‑reported memory and container metrics, and provides practical steps such as adjusting MaxRAMPercentage and pod memory limits to stabilize memory usage.


Managing the memory usage of Java processes running in Kubernetes pods is more challenging than expected; even with proper JVM memory configuration, OOMKilled events can still occur.

TL;DR

MaxRAMPercentage caps only the JVM heap, not the total memory of the Java process; non‑heap memory (metaspace, thread stacks, code cache, GC structures, direct buffers) is allocated on top of the heap. Adjusting the pod's memory limit or the heap percentage (a starting point of around 75% worked well here) helps avoid OOMKilled incidents.

Context

In production, Java applications in Kubernetes repeatedly face OOMKilled and restart issues. Although memory settings are defined at both the pod and JVM levels, fluctuations in total pod memory usage cause frequent restarts.

Pod‑level configuration: memory limit set to 2Gi.

resources:
  requests:
    memory: "2Gi"
    cpu: "4"
  limits:
    memory: "2Gi"
    cpu: "4"

JVM‑level configuration: -XX:MaxRAMPercentage=80.0, which sizes the maximum heap as a percentage of the memory the container makes available to the JVM.

Note that MaxRAMPercentage does not limit the total memory the Java process can use; it only defines the maximum JVM heap size, the memory region the application allocates its objects into. Everything else the process needs sits outside this cap.
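As a quick sanity check, a minimal sketch like the following (the class name is illustrative) can be run inside the container to confirm the heap ceiling the JVM derived from MaxRAMPercentage; with an 80% setting and a 2Gi limit it should report roughly 1.6 GiB, and nothing in this figure accounts for non‑heap memory.

public class HeapCeilingCheck {
    public static void main(String[] args) {
        // Max heap the JVM will use; with -XX:MaxRAMPercentage=80.0 and a 2Gi
        // container limit this is expected to be roughly 1.6 GiB.
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap visible to the JVM: %d MiB%n", maxHeapBytes >> 20);
    }
}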

Initial Attempts

Increasing the pod memory limit from 2Gi to 4Gi reduced the number of OOMKilled events, but problems remained: container_memory_working_set and container_memory_rss kept approaching 100% of the limit while JVM heap and non‑heap usage stayed low.

Analysis

Why is Java total memory usage far lower than system memory usage?

The JVM pre‑allocates (commits) memory from the OS up to its configured maximum; once the committed heap reaches that maximum, container_memory_working_set and container_memory_rss stop growing. From the container's perspective this committed memory counts as usage, even though actual heap and non‑heap usage inside the JVM are much lower.

In the java.lang.management API, MemoryUsage.getCommitted() is documented as returning the amount of memory (in bytes) that is guaranteed to be available for the JVM to use.

The JVM retains this committed memory and does not release it easily, especially with G1 GC, leading to high container memory metrics without corresponding JVM heap pressure.
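The gap is easy to observe from inside the process. The following minimal sketch (illustrative class name) prints used versus committed memory for the heap and non‑heap pools via MemoryMXBean; the difference between the two figures is memory the JVM has already claimed from the OS and that the container accounts for in WSS/RSS.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class CommittedVsUsed {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        print("heap", bean.getHeapMemoryUsage());
        print("non-heap", bean.getNonHeapMemoryUsage());
    }

    // "used" is what the application actually occupies; "committed" is what the
    // JVM has claimed from the OS and is what container metrics tend to track.
    // getMax() may return -1 when no maximum is defined (common for non-heap).
    private static void print(String name, MemoryUsage usage) {
        System.out.printf("%-8s used=%d MiB  committed=%d MiB  max=%d MiB%n",
                name, usage.getUsed() >> 20, usage.getCommitted() >> 20, usage.getMax() >> 20);
    }
}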

Why does WSS/RSS exceed JVM total memory?

Native Memory Tracking (NMT, typically enabled with -XX:NativeMemoryTracking=summary and inspected with jcmd <pid> VM.native_memory summary) breaks down the JVM's own memory usage and confirms that the discrepancy between container RSS and JVM committed memory is about 300 MiB.

Native Memory Tracking:
  Total: reserved=5066125KB, committed=3585293KB
  - Java Heap (reserved=3145728KB, committed=3145728KB)
  - Class (reserved=1150387KB, committed=113419KB)
  ...
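For a rough in‑process approximation of the same comparison (a sketch only; NMT remains the authoritative breakdown), the committed sizes of all JVM memory pools can be summed and held against container RSS. The remainder is native memory outside the pools, roughly the ~300 MiB gap noted above.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;
import java.util.Objects;

public class CommittedFootprint {
    public static void main(String[] args) {
        // Sum committed memory across all JVM memory pools; whatever container RSS
        // reports beyond this is native memory outside the pools (thread stacks,
        // GC structures, direct buffers, allocator overhead).
        long committed = ManagementFactory.getMemoryPoolMXBeans().stream()
                .map(MemoryPoolMXBean::getUsage)
                .filter(Objects::nonNull)
                .mapToLong(MemoryUsage::getCommitted)
                .sum();
        System.out.printf("Committed across JVM memory pools: %d MiB%n", committed >> 20);
    }
}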

Why does system memory usage stay near 100% after increasing pod limits?

The JVM sizes itself against the cgroup memory limit, i.e. resources.limits.memory, not resources.requests.memory. Reducing MaxRAMPercentage from 80% to 75% shrank the heap's share of that limit and left more headroom for non‑heap memory, which lowered WSS/RSS and restored a safety margin.
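A back‑of‑the‑envelope calculation shows why those five percentage points matter. The sketch below assumes the 4Gi limit used here and compares the heap ceiling with the remaining non‑heap headroom for both settings; at 75% the 3 GiB heap matches the Java Heap reserved/committed figure in the NMT output above.

public class HeadroomEstimate {
    public static void main(String[] args) {
        long podLimitMiB = 4L * 1024;                    // resources.limits.memory = 4Gi
        for (double pct : new double[] {80.0, 75.0}) {
            double heapMiB = podLimitMiB * pct / 100.0;  // heap ceiling from MaxRAMPercentage
            double nonHeapMiB = podLimitMiB - heapMiB;   // headroom for metaspace, stacks, GC, buffers
            System.out.printf("MaxRAMPercentage=%.1f -> heap %.0f MiB, non-heap headroom %.0f MiB%n",
                    pct, heapMiB, nonHeapMiB);
        }
    }
}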

Conclusion

To mitigate unpredictable Java memory usage and eliminate OOMKilled events in Kubernetes pods:

- Start with a reasonable MaxRAMPercentage (e.g., 75%).

- Continuously monitor heap usage and system memory (WSS/RSS); a minimal in‑process watcher is sketched below.

- If heap usage stays above 90%, consider increasing the pod memory limit.

- If heap usage is low but WSS/RSS is high, reduce MaxRAMPercentage to leave more headroom for non‑heap memory.

- Maintain a 5-10% safety margin on pod memory limits.
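For the monitoring point, here is a minimal in‑process sketch (illustrative class name; a real deployment would rely on container metrics and an APM agent instead) that logs the heap usage ratio so sustained readings above roughly 90% stand out:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapWatch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Log the heap usage ratio every 30 seconds; sustained readings above
        // ~90% suggest raising the pod memory limit, per the checklist above.
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            double ratio = (double) heap.getUsed() / heap.getMax();
            System.out.printf("heap used: %.1f%% of max%n", ratio * 100.0);
        }, 0, 30, TimeUnit.SECONDS);
        Thread.currentThread().join(); // keep the demo process alive
    }
}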

Tags: Java, JVM, memory management, Kubernetes, OOMKilled
Written by DevOps Cloud Academy. Exploring industry DevOps practices and technical expertise.
