Why Your Kubernetes Pods Restart: Understanding Container OOM and Memory Limits
This article explains how a Java application's memory usage can cause Kubernetes pod OOM, the role of namespaces and cgroups in container isolation and resource limiting, and practical steps to adjust JVM settings to prevent frequent restarts.
Recently, a business developer reported that their application's memory usage was high and the service was restarting frequently, which prompted an investigation.
Monitoring the pod showed memory usage nearing its limit, while the JVM inside the container only used about 30% of its allocated heap, indicating that the pod itself was running out of memory and being killed by Kubernetes.
Kubernetes restarts pods to maintain the desired replica count, so the application appeared to restart after running for a while.
The container runs on a host that uses Linux namespaces for isolation and cgroups for resource limiting: namespaces give processes separate views of the system, while cgroups enforce limits on CPU, memory, disk I/O, and network bandwidth.
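As an illustration of how a cgroup exposes its memory limit to processes inside the container: with cgroup v2, the limit appears in `/sys/fs/cgroup/memory.max` as either a byte count or the literal string `max` (unlimited). A minimal sketch of reading and parsing that value (the class and method names are my own, not from the article):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CgroupLimit {
    // Parse the contents of memory.max: "max" means unlimited,
    // otherwise the file holds the limit in bytes.
    static long parseMemoryMax(String raw) {
        String v = raw.trim();
        return v.equals("max") ? Long.MAX_VALUE : Long.parseLong(v);
    }

    public static void main(String[] args) throws Exception {
        Path p = Path.of("/sys/fs/cgroup/memory.max");
        // Fall back to a sample value (1 GiB) when not running inside a cgroup v2 container.
        String raw = Files.exists(p) ? Files.readString(p) : "1073741824";
        System.out.println("memory limit (bytes): " + parseMemoryMax(raw));
    }
}
```

This is the same limit Kubernetes writes when you set `resources.limits.memory`; the kernel kills any process in the cgroup that pushes total usage past it.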
In Kubernetes, resource requests and limits are defined as follows:
```yaml
resources:
  requests:
    memory: 1024Mi
    cpu: 0.1
  limits:
    memory: 1024Mi
    cpu: 4
```

This manifest guarantees each container at least 0.1 CPU core and 1024 MiB of memory, and caps it at 4 CPU cores and 1024 MiB of memory.
Different Types of OOM
The issue was confirmed to be a container OOM: when a container exceeds its memory limit, it is killed and restarted, and Kubernetes records an event with exit code 137.
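The value 137 is not arbitrary: container runtimes report an exit status of 128 plus the signal number, and SIGKILL, which the kernel OOM killer delivers, is signal 9. A quick sketch of the decoding:

```java
public class ExitCode {
    public static void main(String[] args) {
        int exitCode = 137;          // reported in the pod's event log
        int signal = exitCode - 128; // runtimes encode "killed by signal N" as 128 + N
        System.out.println("killed by signal " + signal); // prints "killed by signal 9" (SIGKILL)
    }
}
```

On a live cluster, `kubectl describe pod <name>` typically shows this under `Last State: Terminated` with `Reason: OOMKilled` and `Exit Code: 137`.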
Although the JVM heap was capped at 8 GB, the Java process also consumes off-heap memory (metaspace, thread stacks, code cache, direct buffers), so its total footprint easily exceeded the container's 8 GB limit and triggered the OOM kill.
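The heap vs. off-heap split can be observed from inside the process with the standard `java.lang.management` API; a minimal sketch (not the article's code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryBreakdown {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();       // objects allocated with `new`
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // metaspace, code cache, etc.
        System.out.printf("heap used:     %d MiB%n", heap.getUsed() / (1024 * 1024));
        System.out.printf("non-heap used: %d MiB%n", nonHeap.getUsed() / (1024 * 1024));
    }
}
```

Note that even this non-heap figure understates the true footprint: thread stacks and direct buffers are not included. For a fuller native-memory breakdown, start the JVM with `-XX:NativeMemoryTracking=summary` and inspect it with `jcmd <pid> VM.native_memory summary`.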
Cloud‑Native Optimizations
Since the application itself does not need that much memory, limiting the heap to 4 GB prevents container memory overflow and resolves the restart problem.
It is recommended to configure the JVM heap to less than two‑thirds of the container’s memory limit, leaving headroom for non‑heap usage.
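That rule of thumb reduces to simple arithmetic; the helper below is a hypothetical sketch of the sizing, not code from the article:

```java
public class HeapSizing {
    // Two-thirds rule of thumb: leave roughly a third of the container
    // limit as headroom for off-heap memory.
    static long recommendedHeapMiB(long containerLimitMiB) {
        return containerLimitMiB * 2 / 3;
    }

    public static void main(String[] args) {
        long limitMiB = 8192; // e.g. an 8 GiB container limit
        System.out.println("-Xmx" + recommendedHeapMiB(limitMiB) + "m"); // prints "-Xmx5461m"
    }
}
```

On JDK 8u191 and later, the same effect can be achieved without a fixed `-Xmx` by letting the container-aware JVM size the heap from the cgroup limit, e.g. `-XX:MaxRAMPercentage=66.0`.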
Programmer DD
A tinkering programmer and author of "Spring Cloud Microservices in Action"
