
Comprehensive Migration Plan for MongoDB to Alternative Data Stores

This article presents a complete MongoDB migration solution: the migration rhythm, code refactoring with a decorator pattern, replacement data sources (JImKV, MySQL, and Elasticsearch), bulk and incremental data-transfer strategies, and deployment safeguards (monitoring, gray release, and rollback) that together ensure a seamless cut-over without service disruption.

JD Tech

The article introduces a full migration plan for a legacy system that relies on MongoDB as its core data store. The current deployment is a one-primary, one-replica configuration with a single database and two collections serving seven applications. The goal is to move the MongoDB data to a different storage medium with no impact on online usage.

Migration Rhythm (2.1)

The overall rhythm includes:

1. Clarify scope – identify all business areas that use MongoDB, alongside existing MySQL sources.

2. Determine the target storage medium and performance standards to cover all existing workloads.

3. Refactor the DAO layer of the original data structures.

4. Implement dual‑write and perform data migration.

5. Conduct R2 traffic verification, regression testing, and data comparison.

6. Gradually increase traffic (cut‑over volume).

Code Refactoring / Data Heterogeneity (2.2)

A decorator pattern is adopted to uniformly control dual‑write logic (primary write, secondary write) and traffic‑cut logic, including offline handling. Existing direct MongoDB API calls are extracted to the DAO layer without changing business logic or interfaces, facilitating later cut‑over adaptation.

The chosen replacement data sources, based on the above principles, are JImKV (JD’s self‑developed middleware), MySQL, and Elasticsearch. The DAO layer is refactored accordingly.
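The decorator idea described above can be sketched as follows. This is a minimal illustration, not the article's actual code: the DAO interface, class names, and in-memory "stores" are all hypothetical stand-ins for the real MongoDB/MySQL access layers.

```python
import random

class OrderDao:
    """Interface extracted from the original direct MongoDB calls (hypothetical)."""
    def save(self, order): ...
    def find(self, order_id): ...

class MongoOrderDao(OrderDao):
    def __init__(self):
        self.store = {}                    # stands in for the MongoDB collection
    def save(self, order):
        self.store[order["id"]] = order
    def find(self, order_id):
        return self.store.get(order_id)

class MysqlOrderDao(OrderDao):
    def __init__(self):
        self.store = {}                    # stands in for the MySQL table
    def save(self, order):
        self.store[order["id"]] = order
    def find(self, order_id):
        return self.store.get(order_id)

class DualWriteOrderDao(OrderDao):
    """Decorator: primary write, best-effort secondary write, percentage read cut."""
    def __init__(self, primary, secondary, read_new_pct=0):
        self.primary = primary
        self.secondary = secondary
        self.read_new_pct = read_new_pct   # 0..100, raised gradually during cut-over
    def save(self, order):
        self.primary.save(order)           # the primary write must succeed
        try:
            self.secondary.save(order)     # secondary failures must not break the caller
        except Exception:
            pass                           # in production: push to MQ for async retry
    def find(self, order_id):
        if random.randint(1, 100) <= self.read_new_pct:
            return self.secondary.find(order_id)
        return self.primary.find(order_id)
```

Because the decorator implements the same interface the business code already uses, neither business logic nor interfaces change; only the wiring that constructs the DAO does.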

Existing Data Migration (2.3)

Three candidate approaches were evaluated for feasibility and difficulty: using a big-data extraction task; using a code-driven asynchronous task; and DRC synchronization, which was ruled out because syncing from MongoDB to the target database is not supported.

Considering the overall data volume (single table ~3 million rows), offline big‑data extraction is inefficient. A flexible code‑driven approach allows real‑time speed and scope adjustments. Data is split into two parts: already approved requests (stable, can be migrated directly) and in‑process requests (subject to change, migrated with double‑write during off‑peak hours).
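The code-driven approach for the stable (already approved) portion can be sketched as a throttled paging loop. The adapter functions and tuning parameters below are hypothetical; the point is that batch size and pacing can be adjusted at runtime to protect the online database.

```python
import time

def migrate_stable_records(fetch_page, write_batch, batch_size=500, pause_s=0.05):
    """Page through already-approved (stable) records and copy them to the new store.

    fetch_page(offset, limit) reads from the old store and write_batch(rows) writes
    to the new one; both are hypothetical adapters. Returns the migrated row count.
    """
    offset, migrated = 0, 0
    while True:
        rows = fetch_page(offset, batch_size)
        if not rows:
            break                      # old store exhausted
        write_batch(rows)
        migrated += len(rows)
        offset += batch_size
        time.sleep(pause_s)            # throttle to limit load on the source database
    return migrated
```

In-process records would instead be covered by the dual-write path during off-peak hours, since they may still change after being copied.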

Incremental Data Synchronization (2.4)

For create and update operations that do not include a status field, the process writes to MongoDB first; only if the MongoDB write succeeds is MySQL written. If the MySQL write fails, an asynchronous MQ compensation task is triggered.
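That write ordering and compensation path can be condensed into a small sketch. The three callables are hypothetical adapters around the real MongoDB client, MySQL client, and MQ producer.

```python
def dual_write(record, mongo_save, mysql_save, send_compensation):
    """MongoDB first; MySQL only on success; MQ compensation on MySQL failure."""
    mongo_save(record)                 # raises on failure: the whole operation fails
    try:
        mysql_save(record)
    except Exception:
        send_compensation(record)      # an async consumer will replay the MySQL write
```

The asymmetry is deliberate: a MongoDB failure aborts the operation (it is still the primary store), while a MySQL failure is tolerated and repaired asynchronously.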

Deployment Three‑Sword Strategy (Gray Release / Monitoring / Rollback) (3.0)

After completing migration and code refactoring, the article discusses how to ensure a smooth production rollout without online issues.

Monitoring (3.1 & 3.2)

Incremental data comparison: after dual‑write, MQ triggers a query to compare new and old data in real time; mismatches generate alerts and are logged.

Bulk data comparison: traverse the entire old dataset, fetch corresponding new data, normalize objects, compare for consistency, and log anomalies.

Enhanced comparison logic introduces R2 traffic replay to accelerate verification.
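The bulk comparison described above amounts to normalizing both sides and diffing them. A minimal sketch, with a hypothetical field set and adapter callables:

```python
def normalize(doc):
    """Project a record from either store onto a comparable shape,
    dropping store-specific fields such as MongoDB's _id (hypothetical field set)."""
    return {k: doc[k] for k in ("order_id", "status", "amount") if k in doc}

def compare_datasets(iter_old, fetch_new, alert):
    """Traverse the old dataset, fetch each counterpart from the new store,
    and alert on mismatches; returns the number of inconsistent records."""
    mismatches = 0
    for old in iter_old():
        new = fetch_new(old["order_id"])
        if new is None or normalize(old) != normalize(new):
            mismatches += 1
            alert(old["order_id"], normalize(old), normalize(new) if new else None)
    return mismatches
```

The incremental comparison works the same way, except it is triggered per record by an MQ message after each dual-write instead of by a full traversal.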

Gray Release (3.3)

Traffic is cut over based on supplier and procurement whitelist plus a percentage rollout. ThreadLocal is used to store merchant information for per‑PIN traffic distribution across different threads.
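The routing decision above, whitelist plus percentage with a thread-local PIN, might look like the following sketch. The use of a CRC32 hash for stable bucketing is an assumption; any deterministic hash of the PIN serves the same purpose.

```python
import threading
import zlib

_context = threading.local()           # carries the merchant PIN per request thread

def set_current_pin(pin):
    _context.pin = pin

def route_to_new_store(whitelist, rollout_pct):
    """Decide whether the current request is cut over to the new store:
    whitelisted PINs always cut over; the rest by a stable hash-based percentage."""
    pin = getattr(_context, "pin", None)
    if pin is None:
        return False                   # no merchant context: stay on the old store
    if pin in whitelist:
        return True
    return zlib.crc32(pin.encode()) % 100 < rollout_pct
```

Hashing the PIN (rather than rolling a random number per request) keeps each merchant consistently on one side of the cut, which makes the gray release deterministic and debuggable.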

Rollback (3.4)

Step 1: Verify writes to the new store; treat this as a gray test and roll back if issues arise.

Step 2: After verification, backfill any missing data in the new store.

Step 3: Switch primary writes to the new store.

Step 4: Once reads and writes are stable, decommission the old store.
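The four steps can be modeled as phases that determine the write targets, which is what makes rollback cheap: the phase names and mapping below are a hypothetical sketch, not the article's implementation.

```python
from enum import Enum

class Phase(Enum):
    DUAL_WRITE_OLD_PRIMARY = 1   # steps 1-2: old store primary, new store verified/backfilled
    DUAL_WRITE_NEW_PRIMARY = 2   # step 3: primary writes switched to the new store
    NEW_ONLY = 3                 # step 4: old store decommissioned

def write_targets(phase):
    """Return the (primary, secondary) write targets for a phase."""
    return {
        Phase.DUAL_WRITE_OLD_PRIMARY: ("old", "new"),
        Phase.DUAL_WRITE_NEW_PRIMARY: ("new", "old"),
        Phase.NEW_ONLY: ("new", None),
    }[phase]
```

Until the final phase, both stores keep receiving writes, so rolling back is simply moving the configuration to an earlier phase with no data repair required.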

Conclusion

The article thoroughly outlines the end‑to‑end process for migrating a production MongoDB environment, including migration rhythm, code refactoring with a decorator pattern, data source selection, bulk and incremental data handling, and deployment safeguards such as monitoring, gray release, and rollback to achieve a smooth cut‑over.

Tags: Data Migration, Distributed Systems, Monitoring, Database, MongoDB, Decorator Pattern, Rollback
Written by JD Tech

Official JD technology sharing platform. All the cutting-edge JD tech, innovative insights, and open-source solutions you're looking for, all in one place.