Backend Development · 17 min read

Design and Implementation of a Double‑Write Migration Strategy for the Appointment Service Using a MyBatis Plugin

This article details the background, requirements, and evaluation of migration options for the appointment service, explains why a double‑write approach with a custom MyBatis plugin was chosen, and walks through the full‑sync, incremental sync, code refactoring, plugin implementation, switch‑over procedures, and post‑migration validation to achieve reliable data isolation and system stability.


The appointment service of the Vivo Game Center originally shared its database with other services, causing performance interference; to improve stability and data isolation, the team decided to migrate its tables to a dedicated database.

After reviewing the common migration schemes—offline migration, online migration, and double‑write—the team selected the double‑write solution because the service cannot tolerate downtime, has a high read/write frequency, and can tolerate brief, second‑level data inconsistency.

Preparation involved a full data sync with mysqldump, incremental sync via binlog replication, and consistency checks between the old and new databases.
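The consistency check can be sketched as a per‑table comparison of checksums gathered from both sides (the table names below are hypothetical; in practice the values would come from something like CHECKSUM TABLE or row counts on each database):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ConsistencyCheck {

    // Compare per-table checksums from the old and new databases and
    // return the tables whose values disagree (or are missing on the new side).
    public static List<String> mismatchedTables(Map<String, Long> oldDb, Map<String, Long> newDb) {
        List<String> mismatched = new ArrayList<>();
        for (Map.Entry<String, Long> e : oldDb.entrySet()) {
            Long newChecksum = newDb.get(e.getKey());
            if (!e.getValue().equals(newChecksum)) {
                mismatched.add(e.getKey());
            }
        }
        return mismatched;
    }
}
```

Any table reported here would be re-synced before proceeding to the double‑write phase.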

Code refactoring added a new data source and MyBatis mappers. A custom FullPathBeanNameGenerator was introduced to avoid bean name conflicts:

public class FullPathBeanNameGenerator implements BeanNameGenerator {
    @Override
    public String generateBeanName(BeanDefinition definition, BeanDefinitionRegistry registry) {
        // Use the fully qualified class name as the bean name, so mappers with
        // the same simple name in the old and new packages do not collide.
        return definition.getBeanClassName();
    }
}
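The generator is wired in through the mapper scan for the new data source; a hypothetical configuration (package and bean names here are illustrative, not the actual project layout):

```java
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.context.annotation.Configuration;

// Hypothetical: scans the new-db mapper package with its own SqlSessionFactory
// and applies FullPathBeanNameGenerator to avoid bean name conflicts.
@Configuration
@MapperScan(basePackages = "com.example.appoint.newdb.mapper",
            sqlSessionFactoryRef = "newDbSqlSessionFactory",
            nameGenerator = FullPathBeanNameGenerator.class)
public class NewDbMapperConfig {
}
```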

The migration plugin intercepts MyBatis Executor methods. The interceptor definition looks like:

@Intercepts({
    @Signature(type = Executor.class, method = "update", args = {MappedStatement.class, Object.class}),
    @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}),
    @Signature(type = Executor.class, method = "query", args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class, CacheKey.class, BoundSql.class})
})
public class AppointMigrateInterceptor implements Interceptor {
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object[] args = invocation.getArgs();
        MappedStatement ms = (MappedStatement) args[0];
        // ... plugin logic: decide whether this statement is configured for
        // double-write and, if so, replay it against the new database ...
        // the original statement always proceeds against the old database
        return invocation.proceed();
    }
    // ... other methods ...
}
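The "other methods" elided above follow the standard MyBatis plugin pattern; a minimal sketch of what they would look like inside AppointMigrateInterceptor:

```java
// Inside AppointMigrateInterceptor: standard MyBatis plugin boilerplate.
@Override
public Object plugin(Object target) {
    // Proxy only the intercepted Executor; other targets pass through untouched.
    return Plugin.wrap(target, this);
}

@Override
public void setProperties(Properties properties) {
    // Double-write switches and per-table configuration can be injected here.
}
```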

The plugin extracts the old mapper path, loads the corresponding new mapper class via Class.forName, opens a new SqlSession, and invokes the method reflectively. Key helper methods include:

protected Object invoke(Invocation invocation, TableConfiguration tableConfiguration) throws NoSuchMethodException, InvocationTargetException, IllegalAccessException {
    MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
    Object parameter = invocation.getArgs()[1];
    Class<?> targetMapperClass = tableConfiguration.getTargetMapperClazz();
    SqlSession sqlSession = sqlSessionFactory.openSession();
    try {
        Object mapper = sqlSession.getMapper(targetMapperClass);
        Object[] paramValues = getParamValue(parameter);
        Method method = getMethod(ms.getId(), targetMapperClass);
        // invoke the corresponding mapper method against the new database
        return method.invoke(mapper, paramValues);
    } finally {
        // close the session even if the reflective call fails
        sqlSession.close();
    }
}

private Method getMethod(String id, Class<?> mapperClass) throws NoSuchMethodException {
    // the MappedStatement id ends with the mapper method name
    String methodName = id.substring(id.lastIndexOf('.') + 1);
    Method method = methodCache.get(id);
    if (method == null) {
        method = findMethodByMethodSignature(mapperClass, methodName);
        methodCache.put(id, method);
    }
    return method;
}

private Method findMethodByMethodSignature(Class<?> mapperClass, String methodName) throws NoSuchMethodException {
    for (Method m : mapperClass.getMethods()) {
        if (m.getName().equals(methodName)) {
            return m;
        }
    }
    throw new NoSuchMethodException("No such method " + methodName + " in class " + mapperClass.getName());
}

private Object[] getParamValue(Object parameter) {
    List<Object> paramValues = new ArrayList<>();
    if (parameter instanceof Map) {
        Map<?, ?> paramMap = (Map<?, ?>) parameter;
        if (paramMap.containsKey("collection")) {
            paramValues.add(paramMap.get("collection"));
        } else if (paramMap.containsKey("array")) {
            paramValues.add(paramMap.get("array"));
        } else {
            // MyBatis stores each argument twice in its ParamMap (named key
            // plus "paramN"), so half the map size is the argument count
            int count = 1;
            while (count <= paramMap.size() / 2) {
                try {
                    paramValues.add(paramMap.get("param" + (count++)));
                } catch (BindingException e) {
                    // MyBatis ParamMap throws BindingException for missing keys
                    break;
                }
            }
        }
    } else if (parameter != null) {
        paramValues.add(parameter);
    }
    return paramValues.toArray();
}

Special attention was given to primary‑key handling: the migration ensures useGeneratedKeys is used so that IDs remain consistent between old and new tables, and batch INSERT IGNORE statements were excluded from double‑write to avoid mismatched IDs.
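In annotation‑based MyBatis, propagating the generated key back into the entity looks roughly like this (mapper, table, and entity names are illustrative), so the same id written to the old database can be reused when writing to the new one:

```java
// Sketch: useGeneratedKeys copies the auto-increment id into the entity's
// "id" property after the old-db insert, so the double-write can reuse it.
public interface AppointMapper {
    @Insert("INSERT INTO appoint (user_id, game_id) VALUES (#{userId}, #{gameId})")
    @Options(useGeneratedKeys = true, keyProperty = "id")
    int insert(Appoint appoint);
}
```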

Transactional sections were temporarily disabled during double‑write, and asynchronous writes to the new database were examined for thread‑safety issues, such as list mutation after the old‑db write.
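One simple defense against that kind of mutation is to snapshot collection parameters before handing them to the asynchronous new‑db write; a minimal sketch (class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleWriteSnapshot {

    // Copy the parameter list before submitting the asynchronous new-db write,
    // so later mutation of the original list (e.g. by the caller after the
    // old-db write returns) cannot change what gets written to the new database.
    public static <T> List<T> snapshot(List<T> params) {
        return new ArrayList<>(params);
    }
}
```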

Switch-over process:

1. Deploy the double-write plugin while keeping reads and writes on the old database.

2. Run the full-sync and incremental-sync tools to populate the new database.

3. Stop the sync tool and enable double-write (writes go to both databases, with the read-compare switch turned on).

4. Run a full consistency check and enable a compare-and-compensate job.

5. Gradually shift read traffic to the new database, monitoring latency.

6. When confidence is high, stop double-write, switch reads and writes fully to the new database, and run a reverse compensate job to sync any remaining data back.

7. Finally, remove the migration code and decommission the old database.
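The gradual shift of read traffic can be implemented with a simple percentage switch; a sketch, assuming the percentage would in practice come from a dynamic configuration center rather than a constructor argument:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ReadRouter {

    // Percentage of read traffic (0-100) routed to the new database.
    private final int newDbReadPercent;

    public ReadRouter(int newDbReadPercent) {
        this.newDbReadPercent = newDbReadPercent;
    }

    // Returns true for the fraction of requests that should read the new db.
    public boolean readFromNewDb() {
        return ThreadLocalRandom.current().nextInt(100) < newDbReadPercent;
    }
}
```

Raising the percentage step by step (e.g. 1%, 10%, 50%, 100%) while watching latency matches the gradual-shift step above.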

The article concludes that each step has a rollback plan, monitoring was set up for anomalies, and the approach can be reused for other services with similar requirements.

Tags: Java, MySQL, MyBatis, Data Synchronization, Database Migration, Double Write
Written by High Availability Architecture (official account).