Why MyBatis Triggers OOM and How to Fix It in Java Services
The article explains the root causes of frequent OutOfMemoryError in a Java backend service, analyzes how MyBatis’s SQL building can exhaust heap memory, demonstrates a reproducible test case, and offers practical recommendations to prevent and resolve such memory leaks.
Preface
In a recent production incident the service repeatedly threw OutOfMemoryError (OOM), causing a complete outage. After an emergency restart the root cause needed to be identified and fixed.
Reasons for OutOfMemoryError
OOM generally stems from two sources: insufficient heap space and insufficient metaspace.
Heap exhaustion occurs when objects remain strongly referenced and cannot be reclaimed, eventually exceeding the -Xmx limit.
Metaspace (introduced in Java 8) stores class metadata outside the heap; excessive class loading can also lead to OOM.
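Both failure modes can be bounded and diagnosed through JVM flags. The fragment below is an illustrative configuration, not a recommendation; the values and paths are examples:

```shell
# Illustrative JVM flags covering both OOM flavors (values are examples):
#   -Xmx                    caps the heap; exceeding it raises "Java heap space"
#   -XX:MaxMetaspaceSize    caps metaspace; runaway class loading raises "Metaspace"
#   -XX:+HeapDumpOnOutOfMemoryError  writes a .hprof snapshot at the moment of failure
java -Xmx512m -XX:MaxMetaspaceSize=256m \
     -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof \
     -jar app.jar
```

Capturing a heap dump at the point of failure is what makes post-mortem analysis possible, which matters later in this story.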
Common Heap OOM Scenarios
Loading excessively large query results into memory.
Loops that keep allocating while retaining references to earlier objects.
Connection pools or I/O streams not closed properly.
Static collections that keep references indefinitely.
These are typical cases, though real‑world problems can be more obscure.
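The static-collection case deserves a concrete illustration. This minimal sketch (class and method names are invented for the example) shows a cache that only ever grows; everything it holds stays strongly reachable from the class itself, so no GC cycle can reclaim it:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the static-collection leak: entries added to CACHE are
// reachable from the class itself, so they survive every GC cycle.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void remember(byte[] payload) {
        CACHE.add(payload);           // added, but never removed
    }

    public static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            remember(new byte[1024]); // ~1 MB stays retained after the loop
        }
        System.out.println(size());
    }
}
```

A bounded cache (eviction policy, weak references, or a size cap) avoids this pattern.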
Phenomenon Analysis
The production logs showed an OOM triggered inside MyBatis. MyBatis assembles dynamic SQL using internal collections; when a statement grows very large (for example, an IN clause with tens of thousands of values), the collections holding the SQL text and its parameters grow with it and cannot be reclaimed until the statement completes.
Because the Docker container shipped without jstack and jmap, and no heap dump had been saved, neither thread- nor heap-level analysis of the live process was possible; the cause had to be inferred from the logs and the MyBatis source code.
MyBatis Source Code Insight
Inspecting DynamicContext reveals a ContextMap (a HashMap subclass) named bindings. ForEachSqlNode calls getBindings() and puts one binding per collection element, along with the generated placeholders, into this map. A huge IN clause therefore makes each request retain a large map plus the assembled SQL string; under high concurrency, many such requests are live at once, and the heap fills faster than GC can reclaim it.
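The mechanism can be modeled in a few lines. The sketch below is a simplified stand-in for the real DynamicContext/ForEachSqlNode machinery (the actual MyBatis classes are more involved, and the key naming here only mimics MyBatis' foreach keys): one binding per element, plus the ever-growing SQL string.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified model of what DynamicContext/ForEachSqlNode do: for every
// element of the IN-clause collection, a placeholder binding goes into the
// context map, so the map grows linearly with the collection -- and the
// assembled SQL string grows with it.
public class ForEachModel {
    public static Map<String, Object> bind(List<Long> ids) {
        Map<String, Object> bindings = new HashMap<>();
        StringBuilder sql = new StringBuilder("SELECT * FROM t WHERE id IN (");
        for (int i = 0; i < ids.size(); i++) {
            String key = "__frch_item_" + i;   // naming mimics MyBatis' foreach keys
            bindings.put(key, ids.get(i));
            sql.append(i == 0 ? "" : ", ").append("#{").append(key).append("}");
        }
        sql.append(")");
        bindings.put("sql", sql.toString());   // the full statement text is retained too
        return bindings;
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 0; i < 10_000; i++) ids.add(i);
        Map<String, Object> b = bind(ids);
        System.out.println(b.size());          // one binding per id, plus the SQL text
    }
}
```

With 10,000 ids the map holds 10,001 entries per request; fifty concurrent requests multiply that retained footprint by fifty.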
Reproducing the Issue
To replicate the problem, the SQL IN clause was enlarged and 50 threads were launched concurrently. The JVM was started with -Xmx256m -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError. The console showed frequent Full GC cycles leading to OOM.
The logs confirmed continuous Full GC and eventual OOM.
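The reproduction can be sketched without MyBatis itself; thread count and input sizes below are deliberately small so the sketch runs safely, and all names are invented for the example. The shape of the load is the point: N threads each hold a very large SQL string alive at the same time.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Stripped-down reproduction sketch (MyBatis omitted): N threads each
// assemble a huge IN-clause string, so N large char arrays are live at once.
// With -Xmx256m and large enough inputs, this pattern ends in OOM.
public class OomRepro {
    public static String hugeInClause(int idCount) {
        StringBuilder sql = new StringBuilder("SELECT * FROM t WHERE id IN (");
        for (int i = 0; i < idCount; i++) {
            sql.append(i == 0 ? "" : ",").append(i);
        }
        return sql.append(")").toString();
    }

    public static long run(int threads, int idCount) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        AtomicLong totalChars = new AtomicLong();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                totalChars.addAndGet(hugeInClause(idCount).length());
                done.countDown();
            });
        }
        try {
            done.await(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return totalChars.get();
    }

    public static void main(String[] args) {
        // Kept small here; raise both numbers (and lower -Xmx) to provoke OOM.
        System.out.println(run(8, 100_000));
    }
}
```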
Conclusion
After identifying the cause, the remedy is to optimize SQL generation: avoid excessively large concatenated statements, limit the size of IN clauses, and ensure collections holding SQL fragments are cleared promptly. Careful coding and SQL design prevent unpredictable OOM crashes in production.
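One way to bound the IN clause, sketched below with invented names: split the id list into fixed-size chunks and issue one small query per chunk instead of a single statement with tens of thousands of parameters. The chunk size of 1000 is illustrative (it also happens to be Oracle's IN-list limit).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the IN-clause fix: instead of one statement with 100k parameters,
// split the id list into bounded chunks and run one small query per chunk.
public class InClausePartitioner {
    public static <T> List<List<T>> partition(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            chunks.add(items.subList(i, Math.min(i + chunkSize, items.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 0; i < 2_500; i++) ids.add(i);
        // Each chunk would back one bounded SELECT ... WHERE id IN (...) call.
        System.out.println(partition(ids, 1_000).size()); // 2500 ids -> 3 queries
    }
}
```

Aggregating the per-chunk results in application code keeps every individual statement, and every DynamicContext behind it, small.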