
How to Seamlessly Migrate a Legacy MongoDB System to New Storage

This article presents a complete, step‑by‑step migration plan for a legacy MongoDB‑based system, covering scope analysis, data‑store selection, DAO refactoring, dual‑write synchronization, bulk and incremental data migration, and the three‑pronged deployment strategy of monitoring, gray‑release and rollback to ensure a smooth cut‑over without service disruption.

JD Retail Technology

1. Current Situation

The target system uses MongoDB as its core data store in a 1‑primary‑1‑secondary replica set, serving seven applications with one database and two tables. Although the architecture is simple, it is critical to the business and must be migrated to a new storage medium without affecting online usage.

2. Migration Plan

2.1 Migration Rhythm

1. Identify the full scope: enumerate all business functions that rely on MongoDB (the system also uses MySQL).

2. Determine the target storage medium and the performance requirements needed to cover existing read/write workloads.

3. Refactor the DAO layer that accesses the existing data structures.

4. Implement dual‑write to keep the old and new stores synchronized.

5. Validate traffic with R2 replay, then perform regression testing and data comparison.

6. Gradually shift traffic (cut over) according to a controlled release schedule.

2.2 Code Refactor & Data Heterogeneity

A decorator pattern is used to centralize dual‑write logic (primary and secondary writes) and traffic‑cut logic. Direct MongoDB API calls are extracted to the DAO layer without changing business logic or method signatures, preserving existing behavior while enabling future traffic‑switch adaptations.
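As a concrete illustration, below is a minimal Java sketch of that decorator, assuming a hypothetical OrderDao interface and a MigrationSwitch backed by runtime configuration; none of these names come from the article.

```java
/** Minimal domain type used by the sketches in this article (hypothetical). */
class Order {
    private String id;
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
}

/** Hypothetical DAO contract extracted from the business code. */
interface OrderDao {
    void save(Order order);
    Order findById(String id);
}

/** Runtime flags, assumed to come from a config center. */
interface MigrationSwitch {
    boolean dualWriteEnabled();
    boolean readFromNewStore();
}

/** Decorator that adds dual-write and traffic-cut logic without touching callers. */
public class DualWriteOrderDao implements OrderDao {
    private final OrderDao oldStore;  // existing MongoDB-backed DAO
    private final OrderDao newStore;  // DAO backed by the new storage
    private final MigrationSwitch migration;

    public DualWriteOrderDao(OrderDao oldStore, OrderDao newStore, MigrationSwitch migration) {
        this.oldStore = oldStore;
        this.newStore = newStore;
        this.migration = migration;
    }

    @Override
    public void save(Order order) {
        oldStore.save(order);              // primary write preserves existing behavior
        if (migration.dualWriteEnabled()) {
            newStore.save(order);          // secondary write keeps the new store in sync
        }
    }

    @Override
    public Order findById(String id) {
        // Traffic-cut hook: reads move to the new store only when the switch allows it.
        return migration.readFromNewStore() ? newStore.findById(id) : oldStore.findById(id);
    }
}
```

Because the decorator implements the same interface, business code keeps calling OrderDao unchanged and never learns which store served the request.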

Based on these principles, the team selected JImKV (a JD‑developed middleware) together with MySQL and Elasticsearch as replacements for MongoDB. The DAO layer is modified to route calls to the appropriate data source.
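The routing itself might look like the sketch below, reusing the Order and OrderDao types from the previous sketch. The routing criteria (point lookups to JImKV, structured queries to MySQL, keyword search to Elasticsearch) are assumptions, since the article does not state how calls are divided.

```java
import java.util.List;

/** Hypothetical query-side DAOs for the two non-KV stores. */
interface OrderQueryDao {
    List<Order> findByMerchant(String merchantId);
}

interface OrderSearchDao {
    List<Order> searchByKeyword(String keyword);
}

/** Routes each access pattern to the store best suited for it. */
public class RoutingOrderService {
    private final OrderDao jimKvDao;       // key-value point lookups
    private final OrderQueryDao mysqlDao;  // structured, relational queries
    private final OrderSearchDao esDao;    // full-text search

    public RoutingOrderService(OrderDao jimKvDao, OrderQueryDao mysqlDao, OrderSearchDao esDao) {
        this.jimKvDao = jimKvDao;
        this.mysqlDao = mysqlDao;
        this.esDao = esDao;
    }

    public Order getOrder(String id) {
        return jimKvDao.findById(id);                // single-key reads hit JImKV
    }

    public List<Order> listByMerchant(String merchantId) {
        return mysqlDao.findByMerchant(merchantId);  // relational filters hit MySQL
    }

    public List<Order> search(String keyword) {
        return esDao.searchByKeyword(keyword);       // keyword search hits Elasticsearch
    }
}
```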

2.3 Bulk Data Migration

The total data volume is modest (≈3 million rows per table). Offline bulk processing is inefficient, so a flexible, code‑driven approach is used instead, allowing migration speed and scope to be adjusted dynamically (a migration‑loop sketch follows the list below). Data is divided into two categories:

Approved applications that will not change further—these can be migrated and compared at any time.

In‑process applications where data may change continuously—migration occurs during off‑peak hours with dual‑write enabled.
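As referenced above, here is a minimal sketch of the code-driven migration loop, assuming the old-store DAO exposes an id-ordered paged read and that batch size and pause interval are tunable at runtime; all names are illustrative.

```java
import java.util.List;

/** Hypothetical paged read over the old store, ordered by id. */
interface PagedOrderDao extends OrderDao {
    List<Order> findBatchAfter(String lastId, int batchSize);
}

/** Tunable knobs, re-read each iteration so speed can change mid-run. */
interface MigrationConfig {
    int batchSize();
    long pauseMillis();
}

/** Throttled, resumable bulk copy from the old store to the new one. */
public class BulkMigrator {
    private final PagedOrderDao source;
    private final OrderDao target;
    private final MigrationConfig config;

    public BulkMigrator(PagedOrderDao source, OrderDao target, MigrationConfig config) {
        this.source = source;
        this.target = target;
        this.config = config;
    }

    public void migrate() throws InterruptedException {
        String lastId = null;  // resume cursor; persist it to survive restarts
        while (true) {
            List<Order> batch = source.findBatchAfter(lastId, config.batchSize());
            if (batch.isEmpty()) {
                break;  // all rows copied
            }
            for (Order order : batch) {
                target.save(order);
            }
            lastId = batch.get(batch.size() - 1).getId();
            Thread.sleep(config.pauseMillis());  // throttle; raise during off-peak hours
        }
    }
}
```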

2.4 Incremental Data Sync

For new or updated application requests that do not contain a status field, the workflow is:

Write to MongoDB first; if successful, write to MySQL. If the MySQL write fails, an asynchronous compensation is triggered via MQ.
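A sketch of that write path follows, with a generic MQ producer standing in for the actual middleware, which the article does not name.

```java
/** Hypothetical MQ abstraction for the compensation message. */
interface MessageProducer {
    void send(String topic, String payload);
}

/** Incremental write path: MongoDB first, MySQL second, MQ compensation on failure. */
public class IncrementalSyncWriter {
    private final OrderDao mongoDao;
    private final OrderDao mysqlDao;
    private final MessageProducer mq;

    public IncrementalSyncWriter(OrderDao mongoDao, OrderDao mysqlDao, MessageProducer mq) {
        this.mongoDao = mongoDao;
        this.mysqlDao = mysqlDao;
        this.mq = mq;
    }

    public void save(Order order) {
        mongoDao.save(order);  // old store remains the source of truth during migration
        try {
            mysqlDao.save(order);
        } catch (RuntimeException e) {
            // Asynchronous compensation: a consumer re-reads MongoDB and retries MySQL.
            mq.send("migration-compensation", order.getId());
        }
    }
}
```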

3. The Three Deployment Essentials: Monitoring, Gray Release, Rollback

3.1 Monitoring (Data Comparison Logic)

After dual‑write, an MQ message triggers a real‑time comparison between the new and old stores. Inconsistent records are logged, flagged for business alerts, and exported for analysis.

Full‑store comparison traverses all old‑store data, fetches corresponding new‑store records, normalizes them, and logs any mismatches.
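The per-record check might look like the sketch below, which assumes records are normalized (defaults, formats, field encodings) before an equality comparison and that Order overrides equals() over its business fields; the alert hook is illustrative.

```java
import java.util.Objects;

/** Hypothetical business-alert hook. */
interface AlertService {
    void fire(String alertKey, String recordId);
}

/** Per-record comparison shared by the real-time (MQ) and full-store passes. */
public class DataComparator {
    private final OrderDao oldStore;
    private final OrderDao newStore;
    private final AlertService alerts;

    public DataComparator(OrderDao oldStore, OrderDao newStore, AlertService alerts) {
        this.oldStore = oldStore;
        this.newStore = newStore;
        this.alerts = alerts;
    }

    /** Returns true when both stores agree for the given id. */
    public boolean compare(String id) {
        Order oldRecord = normalize(oldStore.findById(id));
        Order newRecord = normalize(newStore.findById(id));
        if (!Objects.equals(oldRecord, newRecord)) {
            alerts.fire("migration-diff", id);  // log and flag for business alerts
            return false;
        }
        return true;
    }

    private Order normalize(Order order) {
        // Smooth over representation differences between stores
        // (null vs. default values, date formats, field encodings).
        return order;  // placeholder: real normalization is store-specific
    }
}
```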

3.2 Monitoring (Comparison Read Logic)

The comparison logic incorporates R2 traffic replay to accelerate verification.

3.3 Gray Release (Read Traffic Splitting)

Read traffic is split based on supplier/consumer whitelists and a configurable percentage. Merchant information is stored in a ThreadLocal so the routing decision can be made anywhere along the call chain.
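Below is a sketch of the split decision, assuming the merchant id is set into the ThreadLocal at the request entry point; the deterministic hash-based split is an assumption that keeps each merchant routed consistently.

```java
import java.util.Set;

/** Whitelist + percentage read splitting with a ThreadLocal merchant context. */
public class GrayReleaseRouter {
    private static final ThreadLocal<String> MERCHANT = new ThreadLocal<>();

    /** Called at the request entry point, before any routing decision. */
    public static void setMerchant(String merchantId) { MERCHANT.set(merchantId); }

    /** Call in a finally block to avoid leaks on pooled threads. */
    public static void clearMerchant() { MERCHANT.remove(); }

    private final Set<String> whitelist;  // suppliers/consumers forced to the new store
    private final int percentage;         // 0-100, adjustable at runtime

    public GrayReleaseRouter(Set<String> whitelist, int percentage) {
        this.whitelist = whitelist;
        this.percentage = percentage;
    }

    public boolean readFromNewStore() {
        String merchantId = MERCHANT.get();
        if (merchantId == null) {
            return false;  // no context: stay on the old store
        }
        if (whitelist.contains(merchantId)) {
            return true;   // whitelisted merchants always read the new store
        }
        // Deterministic split: the same merchant always lands on the same side.
        return Math.floorMod(merchantId.hashCode(), 100) < percentage;
    }
}
```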

3.4 Rollback (Write Traffic Splitting)

1. Validate that writes to the new store succeed; if not, revert immediately.

2. After validation, backfill any data missing from the new store.

3. Switch primary writes to the new store (as sketched below).

4. Once the new store is stable, decommission the old MongoDB cluster.
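A sketch of that final write switch follows, reusing the OrderDao interface from the earlier sketches and assuming a single runtime flag decides which store is primary, so rollback is simply flipping the flag back.

```java
/** Cut-over write switch: rollback is just flipping the flag back. */
public class WriteSwitchOrderDao implements OrderDao {
    private final OrderDao mongoDao;           // old store
    private final OrderDao newStoreDao;        // new store
    private volatile boolean newStorePrimary;  // runtime flag from a config center

    public WriteSwitchOrderDao(OrderDao mongoDao, OrderDao newStoreDao) {
        this.mongoDao = mongoDao;
        this.newStoreDao = newStoreDao;
    }

    public void setNewStorePrimary(boolean primary) { this.newStorePrimary = primary; }

    @Override
    public void save(Order order) {
        if (newStorePrimary) {
            newStoreDao.save(order);  // step 3: primary writes go to the new store
            mongoDao.save(order);     // keep the old store in sync until decommission
        } else {
            mongoDao.save(order);     // rollback position: old store stays primary
            newStoreDao.save(order);
        }
    }

    @Override
    public Order findById(String id) {
        return newStorePrimary ? newStoreDao.findById(id) : mongoDao.findById(id);
    }
}
```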

4. Summary

The article outlines a full‑process migration plan for a production environment, emphasizing careful data‑source selection, appropriate design patterns, and disciplined DAO refactoring. Bulk migration and incremental synchronization carry the data across, while comprehensive monitoring, gray release, and rollback mechanisms keep the cut‑over smooth and low‑risk.

Tags: architecture, Database Migration, MongoDB
Written by JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.