
MySQL Data Migration with Double‑Write Using MyBatis Plugin

This article details how the vivo Game Center's appointment service migrated billions of rows to a dedicated MySQL database with zero downtime. A custom MyBatis interceptor implements a double-write strategy that mirrors updates and queries to the new database; the workflow covers full and incremental sync, primary-key consistency, transaction limits, and the final cut-over.


Author: Li Gang from the vivo Internet Server Team.

This article presents a complete MySQL data migration workflow that successfully migrated billions of rows by selecting a double‑write strategy, refactoring business code, and performing full‑ and incremental synchronization.

Background

The appointment service is a critical component of the vivo Game Center. Historically, its tables were stored in the same database as other services, so performance issues in unrelated services could affect appointment reliability. To improve stability and achieve data isolation, the appointment tables needed to be moved to a dedicated database.

Solution Selection

Several migration approaches were evaluated. Because the appointment service has high read/write traffic, requires zero downtime, and can tolerate only second-level data inconsistency, a double-write solution was chosen over shutdown migration or progressive migration.

The double‑write approach allows each operation to be rolled back if necessary and reduces the cost of per‑scenario refactoring by using a MyBatis plugin to implement the required functionality.

Preparation

1. Full and incremental sync were performed using an internal data-sync tool (full sync via mysqldump, incremental sync via binlog, consistency checks by comparing aggregated column hashes).

2. Code refactoring introduced a new data source and MyBatis mappers for the new database. Because the default Spring BeanNameGenerator (AnnotationBeanNameGenerator) registers beans by decapitalized short class name, a custom generator that uses the fully-qualified class name was added to avoid duplicate bean names.

// Register mapper beans under their fully-qualified class names so that
// old- and new-database mappers sharing a simple class name do not collide
public class FullPathBeanNameGenerator implements BeanNameGenerator {
    @Override
    public String generateBeanName(BeanDefinition definition, BeanDefinitionRegistry registry) {
        return definition.getBeanClassName();
    }
}
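To see why the fully-qualified name matters, consider two classes with the same simple name in different packages; a generator based on the decapitalized simple name would produce the same bean name for both. The sketch below uses java.util.Date and java.sql.Date as stand-ins for an old-database and a new-database mapper (in mybatis-spring the custom generator would typically be registered via the nameGenerator attribute of @MapperScan):

```java
public class BeanNameDemo {
    public static void main(String[] args) {
        // java.util.Date and java.sql.Date share the simple name "Date",
        // mirroring old- and new-db mappers with identical class names.
        Class<?> a = java.util.Date.class;
        Class<?> b = java.sql.Date.class;
        // Simple-name-based naming (AnnotationBeanNameGenerator style) collides:
        System.out.println(decapitalize(a.getSimpleName())
                .equals(decapitalize(b.getSimpleName()))); // true -> duplicate bean name
        // Fully-qualified naming (FullPathBeanNameGenerator style) stays unique:
        System.out.println(a.getName().equals(b.getName())); // false
    }

    // Mimics the default decapitalization of a short class name
    static String decapitalize(String s) {
        return Character.toLowerCase(s.charAt(0)) + s.substring(1);
    }
}
```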

Primary-key IDs are auto-increment values generated by MySQL. The migration enables useGeneratedKeys so that the ID generated by the old database is written back into the entity and can be reused when inserting the same row into the new database.
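A mapper declaration sketch of this setup (annotation style; names like AppointMapper, Appoint, and the column names are illustrative, not the original project's). With useGeneratedKeys, MyBatis writes MySQL's auto-increment value back into the keyProperty field after the old-database insert, so the double-write mirror can insert the same primary key into the new database:

```java
public interface AppointMapper {
    @Insert("INSERT INTO appoint (user_id, game_id) VALUES (#{userId}, #{gameId})")
    @Options(useGeneratedKeys = true, keyProperty = "id")
    int insert(Appoint appoint);
}
```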

MyBatis Plugin Implementation

The plugin intercepts the Executor's update and query methods. It obtains the original mapper path, converts it to the new-database mapper, creates a new SqlSession, and invokes the corresponding method via reflection.

@Intercepts({
    @Signature(type = Executor.class, method = "update", args = {MappedStatement.class, Object.class}),
    @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}),
    @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class, CacheKey.class, BoundSql.class})
})
public class AppointMigrateInterceptor implements Interceptor {
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object[] args = invocation.getArgs();
        // MyBatis plugin proxies Executor.update or Executor.query; first argument is MappedStatement
        MappedStatement ms = (MappedStatement) args[0];
        SqlCommandType sqlCommandType = ms.getSqlCommandType();
        String id = ms.getId();
        // Extract the mapper class path from the statement id
        String sourceMapper = id.substring(0, id.lastIndexOf("."));
        // ...
    }
}
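The statement id concatenates the mapper's fully-qualified class name and the method name, so splitting it at the last dot recovers both the mapper path and the method to mirror. A minimal sketch of that parsing (the id used here is illustrative, not from the original project):

```java
public class StatementIdParser {
    // Everything before the last dot is the mapper's fully-qualified class name
    static String mapperOf(String id) {
        return id.substring(0, id.lastIndexOf('.'));
    }

    // Everything after the last dot is the mapper method name
    static String methodOf(String id) {
        return id.substring(id.lastIndexOf('.') + 1);
    }

    public static void main(String[] args) {
        String id = "com.example.dao.AppointMapper.insert";
        System.out.println(mapperOf(id)); // com.example.dao.AppointMapper
        System.out.println(methodOf(id)); // insert
    }
}
```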

Helper methods retrieve the target method and its parameters:

protected Object invoke(Invocation invocation, TableConfiguration tableConfiguration) throws NoSuchMethodException, InvocationTargetException, IllegalAccessException {
    MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
    Object parameter = invocation.getArgs()[1];
    Class<?> targetMapperClass = tableConfiguration.getTargetMapperClazz();
    SqlSession sqlSession = sqlSessionFactory.openSession();
    Object result = null;
    try {
        Object mapper = sqlSession.getMapper(targetMapperClass);
        Object[] paramValues = getParamValue(parameter);
        Method method = getMethod(ms.getId(), targetMapperClass);
        paramValues = fixNullParam(method, paramValues);
        result = method.invoke(mapper, paramValues);
    } finally {
        sqlSession.close();
    }
    return result;
}

private Method getMethod(String id, Class<?> mapperClass) throws NoSuchMethodException {
    String methodName = id.substring(id.lastIndexOf(".") + 1);
    String key = id;
    Method method = methodCache.get(key);
    if (method == null) {
        method = findMethodByMethodSignature(mapperClass, methodName);
        if (method == null) {
            throw new NoSuchMethodException("No such method " + methodName + " in class " + mapperClass.getName());
        }
        methodCache.put(key, method);
    }
    return method;
}

private Method findMethodByMethodSignature(Class<?> mapperClass, String methodName) {
    // MyBatis mapper methods cannot be overloaded, so matching by name is sufficient
    for (Method m : mapperClass.getMethods()) {
        if (m.getName().equals(methodName)) {
            return m;
        }
    }
    return null;
}

Parameter conversion reverses the internal MapperMethod.convertArgsToSqlCommandParam process:

private Object[] getParamValue(Object parameter) {
    List<Object> paramValues = new ArrayList<>();
    if (parameter instanceof Map) {
        Map<String, Object> paramMap = (Map<String, Object>) parameter;
        if (paramMap.containsKey("collection")) {
            paramValues.add(paramMap.get("collection"));
        } else if (paramMap.containsKey("array")) {
            paramValues.add(paramMap.get("array"));
        } else {
            int count = 1;
            // MyBatis's ParamMap throws BindingException for absent keys,
            // hence the try/catch instead of a containsKey check
            while (count <= paramMap.size() / 2) {
                try {
                    paramValues.add(paramMap.get("param" + (count++)));
                } catch (BindingException e) {
                    break;
                }
            }
        }
    } else if (parameter != null) {
        paramValues.add(parameter);
    }
    return paramValues.toArray();
}
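For multiple named arguments, MapperMethod.convertArgsToSqlCommandParam stores each value twice: once under its @Param name and once under a positional paramN key, which is why the loop above runs to paramMap.size() / 2. A self-contained sketch of that reversal (the map below imitates the MyBatis-built parameter map for two arguments; key names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ParamMapDemo {
    // Rebuilds positional arguments from a map shaped like the one MyBatis
    // produces: each value appears under both its name and "paramN"
    static Object[] getParamValue(Map<String, Object> paramMap) {
        List<Object> values = new ArrayList<>();
        int count = 1;
        while (count <= paramMap.size() / 2) {
            values.add(paramMap.get("param" + (count++)));
        }
        return values.toArray();
    }

    public static void main(String[] args) {
        Map<String, Object> paramMap = new HashMap<>();
        paramMap.put("userId", 42L);
        paramMap.put("param1", 42L);
        paramMap.put("gameId", 7L);
        paramMap.put("param2", 7L);
        System.out.println(Arrays.toString(getParamValue(paramMap))); // [42, 7]
    }
}
```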

Additional utility to handle null parameters:

private Object[] fixNullParam(Method method, Object[] paramValues) {
    if (method.getParameterTypes().length > 0 && paramValues.length == 0) {
        return new Object[]{null};
    }
    return paramValues;
}

Double‑Write Process

The migration proceeds through the following stages:

1. Deploy the double-write-enabled code, but keep both the read and write switches pointing to the old database.

2. Use the internal data-sync tool to perform full and incremental sync from the old database to the new one.

3. Stop the sync tool, enable the write switch for double-write, and optionally turn on the read-compare switch.

4. Run a consistency check and start a compare-and-compensate task to reconcile any differences.

5. Gradually shift read traffic to the new database while keeping double-write active.

6. Finally, disable double-write, switch reads and writes entirely to the new database, and run a reverse compensation task to sync any remaining data back to the old database before cleaning up the migration code.
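The stages above can be driven by two runtime switches, one for writes and one for reads. A minimal sketch of that routing logic (the enum and method names are assumptions for illustration, not the original project's configuration):

```java
public class MigrateSwitchDemo {
    // Write switch: old only, mirrored to both, or new only
    enum WriteMode { OLD_ONLY, DOUBLE_WRITE, NEW_ONLY }
    // Read switch: which database serves queries
    enum ReadMode { OLD, NEW }

    // The interceptor mirrors a write only while double-write is enabled
    static boolean shouldMirrorWrite(WriteMode mode) {
        return mode == WriteMode.DOUBLE_WRITE;
    }

    static String readTarget(ReadMode mode) {
        return mode == ReadMode.NEW ? "new_db" : "old_db";
    }

    public static void main(String[] args) {
        // Stage 1: code deployed, both switches still on the old database
        System.out.println(shouldMirrorWrite(WriteMode.OLD_ONLY));     // false
        // Stage 3: double-write enabled
        System.out.println(shouldMirrorWrite(WriteMode.DOUBLE_WRITE)); // true
        // Stage 5: reads shifted to the new database
        System.out.println(readTarget(ReadMode.NEW));                  // new_db
    }
}
```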

Key operational details include:

Ensuring primary‑key IDs remain consistent; special handling is required for INSERT IGNORE or ON DUPLICATE KEY UPDATE statements because useGeneratedKeys may produce mismatched IDs.

Transaction‑related logic was paused during double‑write because the plugin does not support transactional guarantees.

Asynchronous writes to the new database can cause data‑consistency issues if the in‑memory objects are mutated after the old‑database write; therefore, business logic must be reviewed before migration.

Reverse compensation is preferred over asynchronous writes to the old database because the MyBatis plugin cannot guarantee that the old‑database write completes after the executor is closed.

Summary

The migration was completed with a series of rollback‑capable steps, extensive monitoring, and careful handling of edge cases such as auto‑increment IDs, transaction boundaries, and asynchronous write hazards. The experience suggests that for similar large‑scale migrations, attention should be paid to plugin implementation, primary‑key consistency, transaction support, and asynchronous write side‑effects.

Tags: data migration, database, plugin, MySQL, MyBatis, double write
Written by

Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
