Handling Large Data Sets in MySQL: Regular, Streaming, and Cursor Queries with MyBatis
The article explains how to process massive MySQL data sets—covering data migration, export, and batch handling—by comparing three approaches: regular queries (full or paginated loads), streaming queries in which the driver returns rows one at a time over an open connection, and cursor queries that use a server‑side cursor with a tunable fetchSize. Concrete MyBatis code examples are provided for each approach.
The article describes common scenarios for massive data operations—data migration, export, and batch processing—and explains why loading millions of rows into JVM memory at once causes OutOfMemoryError and poor performance.
It presents three query strategies for handling large result sets in MySQL: the regular query (full or paginated load); the streaming query, in which the MySQL driver returns rows one at a time as an iterator instead of buffering the whole result; and the cursor query, which uses a server‑side cursor and controls how many rows are fetched per round trip via fetchSize.
For each method, the author provides MyBatis‑Plus code examples showing how to define a mapper interface, annotate it with @Select and @Options (setting ResultSetType.FORWARD_ONLY and fetchSize), and, for row‑by‑row processing, combine @ResultType with a ResultHandler callback.
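The annotation combination described above can be sketched as a mapper interface. This is a minimal illustration, not the article's exact code: the `UserMapper`/`User` names and the `user` table are assumptions, and with MySQL Connector/J a `fetchSize` of `Integer.MIN_VALUE` is the conventional way to request streaming (it requires a separate database setup to actually run):

```java
import org.apache.ibatis.annotations.Options;
import org.apache.ibatis.annotations.ResultType;
import org.apache.ibatis.annotations.Select;
import org.apache.ibatis.mapping.ResultSetType;
import org.apache.ibatis.session.ResultHandler;

// Hypothetical mapper sketch: streams rows from an assumed `user` table.
public interface UserMapper {

    // Streaming query: with the MySQL driver, fetchSize = Integer.MIN_VALUE
    // asks for rows one at a time instead of buffering the full result set.
    // The ResultHandler callback receives each mapped row as it arrives,
    // so no List of all rows is ever materialized in the JVM.
    @Select("SELECT * FROM user")
    @Options(resultSetType = ResultSetType.FORWARD_ONLY, fetchSize = Integer.MIN_VALUE)
    @ResultType(User.class)
    void scanAllUsers(ResultHandler<User> handler);
}
```

Because the method returns `void` and pushes rows through the handler, memory use stays flat regardless of table size.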
The article also notes practical considerations: the JDBC connection must stay open for the duration of a streaming query, the cursor must be closed after use, and the fetch size determines how much memory each round trip consumes. It contrasts the memory behavior of non‑streaming and streaming queries.
Additional tips cover how ResultSet.next() behaves during a streamed read in MySQL, how MySQL's fetch behavior differs from Oracle's, and the importance of clearing temporary containers after each processed batch.
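The batch‑clearing tip is worth making concrete. The following self‑contained sketch (plain Java, no database; all names are illustrative) accumulates rows into a temporary container, flushes every `batchSize` rows, and clears the container after each flush so references are released and heap use stays bounded:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchFlushDemo {
    /**
     * Feeds rowCount synthetic rows through a batch buffer.
     * flush stands in for the real per-batch work (bulk insert, file write).
     * Returns the number of flushes performed.
     */
    public static int process(int rowCount, int batchSize, Consumer<List<Integer>> flush) {
        List<Integer> batch = new ArrayList<>(batchSize);
        int flushes = 0;
        for (int row = 0; row < rowCount; row++) {
            batch.add(row);              // stand-in for a fetched row
            if (batch.size() == batchSize) {
                flush.accept(batch);     // hand the full batch downstream
                batch.clear();           // critical: drop references each batch
                flushes++;
            }
        }
        if (!batch.isEmpty()) {          // flush the final partial batch
            flush.accept(batch);
            batch.clear();
            flushes++;
        }
        return flushes;
    }

    public static void main(String[] args) {
        int flushes = process(10, 3, b -> System.out.println("flushed " + b.size() + " rows"));
        System.out.println("total flushes: " + flushes);
    }
}
```

With 10 rows and a batch size of 3, the buffer flushes three full batches plus one partial batch of a single row; without the `clear()` calls, the container would grow to the full row count.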
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as evolving architectures with internet technologies. Idea‑driven, sharing‑minded architects are welcome to exchange and learn together.