MySQL Data Migration with Double‑Write Strategy Using a MyBatis Plugin
This article presents a complete case study of a zero‑downtime migration of a high‑traffic reservation service from a shared MySQL instance to an isolated database. The migration was performed with a double‑write strategy implemented through a custom MyBatis plugin, allowing seamless data sync while keeping the service online; it covers full and incremental sync, consistency verification, traffic shifting, and key lessons on primary‑key handling and asynchronous write side effects.
Background
The reservation business of Vivo Game Center stored its tables together with other services. Slow SQL or other anomalies in those services could affect reservation reliability, prompting the need to isolate the reservation data into a separate database.
Solution Selection
Four migration options were evaluated (offline dump/restore, online replication, incremental sync, and double‑write). Because the service cannot tolerate downtime and must handle high read/write volume, the double‑write scheme was chosen. It provides per‑step rollback and reduces code‑modification cost by using a MyBatis plugin.
Preparation
Three preparatory steps were performed:
Full data sync using a company‑provided tool (mysqldump for the full copy, binlog replay for incremental sync).
Consistency verification by comparing chunked aggregates between source and target.
Code refactoring to add a new data source and MyBatis mapper classes.
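The chunked‑aggregate idea behind the consistency check can be sketched as follows. This is a minimal in‑memory illustration, not the internal vivo tool: rows are reduced to per‑chunk CRC32 aggregates keyed by id range, so only chunks whose aggregates differ need a row‑by‑row comparison.

```java
import java.util.*;
import java.util.zip.CRC32;

public class ChunkVerifier {
    static final int CHUNK_SIZE = 1000;

    // Reduce rows (id -> serialized row content) to one CRC32 per id-range chunk.
    static Map<Long, Long> chunkChecksums(SortedMap<Long, String> rows) {
        Map<Long, CRC32> crcs = new TreeMap<>();
        for (Map.Entry<Long, String> e : rows.entrySet()) {
            long chunk = e.getKey() / CHUNK_SIZE;
            crcs.computeIfAbsent(chunk, k -> new CRC32())
                .update((e.getKey() + "=" + e.getValue()).getBytes());
        }
        Map<Long, Long> checksums = new TreeMap<>();
        crcs.forEach((chunk, crc) -> checksums.put(chunk, crc.getValue()));
        return checksums;
    }

    // Chunk ids whose aggregates differ between source and target databases.
    static List<Long> diffChunks(SortedMap<Long, String> source,
                                 SortedMap<Long, String> target) {
        Map<Long, Long> s = chunkChecksums(source), t = chunkChecksums(target);
        Set<Long> all = new TreeSet<>(s.keySet());
        all.addAll(t.keySet());
        List<Long> mismatched = new ArrayList<>();
        for (Long chunk : all) {
            if (!Objects.equals(s.get(chunk), t.get(chunk))) mismatched.add(chunk);
        }
        return mismatched;
    }
}
```

In production the aggregates would be computed by SQL on each side (e.g. grouping by `id DIV 1000`) rather than by loading rows into memory; the comparison logic is the same.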
During refactoring, a custom BeanNameGenerator was introduced to avoid Spring bean name collisions:
public class FullPathBeanNameGenerator implements BeanNameGenerator {
@Override
public String generateBeanName(BeanDefinition definition, BeanDefinitionRegistry registry) {
return definition.getBeanClassName();
}
}
Primary keys are kept consistent by ensuring all insert statements use useGeneratedKeys. The migration also required checking that every insert correctly returns the generated ID.
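For reference, this is what such an insert mapping looks like in MyBatis XML; the table and column names here are invented for illustration, not taken from the original code:

```xml
<!-- useGeneratedKeys back-fills the auto-increment id into the parameter
     object, so the double-written row can be replayed with the same key. -->
<insert id="insert" useGeneratedKeys="true" keyProperty="id">
    INSERT INTO game_appointment (user_id, game_id, appoint_time)
    VALUES (#{userId}, #{gameId}, #{appointTime})
</insert>
```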
MyBatis Plugin Implementation
The plugin intercepts the Executor.update and Executor.query methods. It extracts the original MappedStatement, determines the corresponding mapper in the new database, and invokes the same method via reflection.
@Intercepts({
@Signature(type = Executor.class, method = "update", args = {MappedStatement.class, Object.class}),
@Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}),
@Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class, CacheKey.class, BoundSql.class})
})
public class AppointMigrateInterceptor implements Interceptor {
@Override
public Object intercept(Invocation invocation) throws Throwable {
Object[] args = invocation.getArgs();
// MyBatis plugin proxies Executor.update or query; first arg is MappedStatement
MappedStatement ms = (MappedStatement) args[0];
SqlCommandType sqlCommandType = ms.getSqlCommandType();
String id = ms.getId();
// Extract original mapper class full name
String sourceMapper = id.substring(0, id.lastIndexOf('.'));
Object result = invocation.proceed(); // run the original statement against the old DB first
// ... (logic to map to the target mapper and mirror the write to the new DB)
return result;
}
}
Helper methods retrieve the target mapper class, open a new SqlSession from a second SqlSessionFactory bound to the new data source, and invoke the method:
protected Object invoke(Invocation invocation, TableConfiguration tableConfiguration) throws NoSuchMethodException, InvocationTargetException, IllegalAccessException {
MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
Object parameter = invocation.getArgs()[1];
Class<?> targetMapperClass = tableConfiguration.getTargetMapperClazz();
SqlSession sqlSession = sqlSessionFactory.openSession();
Object result = null;
try {
Object mapper = sqlSession.getMapper(targetMapperClass);
Object[] paramValues = getParamValue(parameter);
Method method = getMethod(ms.getId(), targetMapperClass);
paramValues = fixNullParam(method, paramValues);
result = method.invoke(mapper, paramValues);
} finally {
sqlSession.close();
}
return result;
}
Method lookup and parameter conversion utilities are also provided; methodCache is a simple Map<String, Method> keyed by statement id:
private Method getMethod(String id, Class<?> mapperClass) throws NoSuchMethodException {
String methodName = id.substring(id.lastIndexOf('.') + 1);
Method method = methodCache.get(id);
if (method == null) {
method = findMethodByMethodSignature(mapperClass, methodName);
if (method == null) {
throw new NoSuchMethodException("No such method " + methodName + " in class " + mapperClass.getName());
}
methodCache.put(id, method);
}
return method;
}
private Method findMethodByMethodSignature(Class<?> mapperClass, String methodName) {
for (Method m : mapperClass.getMethods()) {
if (m.getName().equals(methodName)) {
return m;
}
}
return null;
}
private Object[] getParamValue(Object parameter) {
List<Object> paramValues = new ArrayList<>();
if (parameter instanceof Map) {
Map<?, ?> paramMap = (Map<?, ?>) parameter;
if (paramMap.containsKey("collection")) {
paramValues.add(paramMap.get("collection"));
} else if (paramMap.containsKey("array")) {
paramValues.add(paramMap.get("array"));
} else {
int count = 1;
while (count <= paramMap.size() / 2) {
try {
paramValues.add(paramMap.get("param" + (count++)));
} catch (BindingException e) {
break;
}
}
}
} else if (parameter != null) {
paramValues.add(parameter);
}
return paramValues.toArray();
}
Double‑Write Process
The migration follows these stages:
Initial Phase: Only the old database is read and written; switches for new‑db reads/writes are off.
Full & Incremental Sync: Use the internal data‑sync tool to copy existing data to the new DB.
Enable Double‑Write: Turn on the write switch (both old and new DB receive writes) and optionally enable read‑compare and compensation tasks.
Consistency Checks: After the new DB catches up, run full‑data verification and monitor query‑result comparison logs.
Gradual Traffic Shift: Slowly route read traffic to the new DB while keeping writes double‑written, then fully switch reads and writes to the new DB.
Reverse Compensation: After the new DB is primary, run a reverse compensation job to sync any missed data back to the old DB (useful for rollback safety).
Cleanup: Disable compensation tasks, remove migration code, and decommission the old database.
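The stage switches above can be sketched as independently toggleable flags; the names below are assumptions, not the original code. Writes always hit the old DB first so it stays authoritative, the new‑DB write and the read‑compare are each gated separately, and every stage can be rolled back on its own by flipping one flag.

```java
import java.util.Objects;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

public class MigrationSwitches {
    final AtomicBoolean doubleWrite  = new AtomicBoolean(false);
    final AtomicBoolean readNewDb    = new AtomicBoolean(false);
    final AtomicBoolean compareReads = new AtomicBoolean(false);

    <T> T write(Supplier<T> oldDbWrite, Runnable newDbWrite) {
        T result = oldDbWrite.get();      // old DB remains authoritative
        if (doubleWrite.get()) {
            try {
                newDbWrite.run();         // mirror the write to the new DB
            } catch (RuntimeException e) {
                // swallow and log; the compensation task repairs the gap
            }
        }
        return result;
    }

    <T> T read(Supplier<T> oldDbRead, Supplier<T> newDbRead) {
        if (readNewDb.get()) return newDbRead.get();
        T result = oldDbRead.get();
        if (compareReads.get() && !Objects.equals(result, newDbRead.get())) {
            // log the mismatch for the consistency-check report
        }
        return result;
    }
}
```

Swallowing the new‑DB write failure is deliberate in this phase: the old DB is still the source of truth, and the compensation job re‑syncs any rows the mirror missed.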
Key operational details include handling auto‑increment primary keys: ensuring useGeneratedKeys is used everywhere, dealing with batch INSERT IGNORE anomalies, and recognizing that asynchronous writes can cause data inconsistency if the object's state changes after the old‑DB write but before the new‑DB write.
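To make the INSERT IGNORE pitfall concrete, here is a simulated illustration (no real DB, helper names invented): the driver may return generated keys only for the rows that were actually inserted, so aligning keys to the batch by position leaves some rows without an id and can pair keys with the wrong rows whenever earlier rows were ignored.

```java
import java.util.*;

public class IgnoreAnomalyDemo {
    // Positional back-fill of generated keys, as done for a normal batch insert.
    static List<Long> backfill(int batchSize, List<Long> generatedKeys) {
        List<Long> ids = new ArrayList<>(Collections.nCopies(batchSize, (Long) null));
        for (int i = 0; i < generatedKeys.size() && i < batchSize; i++) {
            ids.set(i, generatedKeys.get(i)); // wrong when earlier rows were ignored
        }
        return ids;
    }
}
```

With a batch of three where the middle row is a duplicate, only two keys come back: the back‑fill gives the third row no id, and the second row receives a key that actually belongs to the third.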
Challenges & Lessons Learned
Auto‑increment ID mismatches in batch inserts require special handling or skipping such statements in the plugin.
Transactional code paths required special handling, because the double‑write plugin does not provide distributed‑transaction guarantees across the two databases.
Asynchronous writes to the new DB can lead to race conditions; careful code review is needed.
MyBatis plugins cannot safely implement “write new DB then async write old DB” because the original executor may be closed before the async task finishes.
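One practical mitigation for the first lesson is a guard that the interceptor consults before mirroring. The sketch below uses assumed names, not the original plugin code: statements known to be unsafe to replay, such as batch INSERT IGNORE with unreliable generated‑key back‑fill, are listed by their MappedStatement id and fall through to the old‑DB‑only path.

```java
import java.util.Set;

public class MirrorGuard {
    private final Set<String> skipStatementIds;

    public MirrorGuard(Set<String> skipStatementIds) {
        this.skipStatementIds = skipStatementIds;
    }

    // statementId is MappedStatement#getId(), i.e. "<mapper FQCN>.<method>".
    public boolean shouldMirror(String statementId) {
        return !skipStatementIds.contains(statementId);
    }
}
```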
Conclusion
The double‑write migration proved feasible for a billion‑row table with zero downtime. Each step included rollback measures, and extensive monitoring allowed quick issue detection. The experience highlights the importance of thorough code adaptation, primary‑key consistency, and awareness of asynchronous side‑effects when designing online data migrations.
vivo Internet Technology
Sharing practical vivo Internet technology insights and salon events, plus the latest industry news and hot conferences.