RobustDB: A Lightweight Client‑Side Read‑Write Splitting Solution for Atlas
This article introduces RobustDB, a client‑side read‑write splitting framework that replaces the problematic Atlas proxy by routing SQL statements to master or slave databases based on DML/DQL detection, using Spring, AspectJ, thread‑local context, and dynamic data‑source management.
Company DBAs complained that the Atlas proxy was hard to use and had several bugs, prompting the development of a client‑side read‑write splitting solution called RobustDB. The goal was to replace Atlas with a lightweight library that could route SQL statements to the appropriate master or slave database without modifying existing application code.
Background
Most enterprises use vertical sharding and partitioned tables to handle large data volumes, and they rely on read‑write splitting to alleviate read pressure. The existing architecture consists of a VIP layer for IP mapping and an Atlas proxy that classifies SQL into DML (Data Manipulation Language) and DQL (Data Query Language), sending DML to the master and DQL to read replicas.
Atlas suffers from maintenance issues, lack of request‑IP mapping, coarse‑grained routing control, and connection‑refresh problems, motivating a client‑side approach.
RobustDB Core Design
The core idea is to determine the SQL type at the moment a connection is requested and set a thread‑local variable indicating whether the operation should use the master or a slave. An AspectJ interceptor reads a custom @DataSourceType annotation on service methods to force all SQL in that method to use the master.
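The article never shows the @DataSourceType annotation itself. A minimal sketch consistent with the aspect's dataSourceType.name() call might look like this (the name element and its default value are assumptions):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Method-level marker read by the routing aspect. RUNTIME retention is
// required so the annotation is still visible to AspectJ when the
// annotated service method is invoked.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface DataSourceType {
    // Data source group to pin the whole method to, e.g. "master".
    String name() default "master";
}
```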
@Aspect
@Component
public class DataSourceAspect {
@Around("execution(* *(..)) && @annotation(dataSourceType)")
public Object aroundMethod(ProceedingJoinPoint pjd, DataSourceType dataSourceType) throws Throwable {
DataSourceContextHolder.setMultiSqlDataSourceType(dataSourceType.name());
try {
return pjd.proceed();
} finally {
// Clear in finally so the context is reset even if the method throws.
DataSourceContextHolder.clearMultiSqlDataSourceType();
}
}
}

The BackendConnection class extends AbstractConnectionAdapter and overrides prepareStatement to obtain a connection based on the current SQL type. It caches connections per data source key.
public final class BackendConnection extends AbstractConnectionAdapter {
private AbstractRoutingDataSource abstractRoutingDataSource;
private final Map<String, Connection> connectionMap = new HashMap<>();
public BackendConnection(AbstractRoutingDataSource abstractRoutingDataSource) {
this.abstractRoutingDataSource = abstractRoutingDataSource;
}
@Override
public PreparedStatement prepareStatement(String sql) throws SQLException {
return getConnectionInternal(sql).prepareStatement(sql);
}
private Connection getConnectionInternal(final String sql) throws SQLException {
if (ExecutionEventUtil.isDML(sql)) {
DataSourceContextHolder.setSingleSqlDataSourceType(DataSourceType.MASTER);
} else if (ExecutionEventUtil.isDQL(sql)) {
DataSourceContextHolder.setSingleSqlDataSourceType(DataSourceType.SLAVE);
}
Object dataSourceKey = abstractRoutingDataSource.determineCurrentLookupKey();
Optional<Connection> connectionOptional = fetchCachedConnection(dataSourceKey.toString());
if (connectionOptional.isPresent()) {
return connectionOptional.get();
}
Connection connection = abstractRoutingDataSource.getTargetDataSource(dataSourceKey).getConnection();
connection.setAutoCommit(super.getAutoCommit());
connection.setTransactionIsolation(super.getTransactionIsolation());
connectionMap.put(dataSourceKey.toString(), connection);
return connection;
}
}

The AbstractRoutingDataSource (and its subclass DynamicDataSource) implements the lookup logic. It maintains a list of slave keys weighted by configuration and selects a slave using a round‑robin counter.
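The ExecutionEventUtil used in getConnectionInternal above is also not shown in the article. A minimal keyword-based sketch could look like the following; a production implementation would additionally need to strip comments and handle hints and vendor-specific syntax:

```java
import java.util.Locale;

// Hypothetical stand-in for the ExecutionEventUtil referenced in
// BackendConnection: classifies a statement by its leading keyword.
final class ExecutionEventUtil {
    private ExecutionEventUtil() {}

    public static boolean isDML(String sql) {
        String head = leadingKeyword(sql);
        return head.equals("INSERT") || head.equals("UPDATE")
                || head.equals("DELETE") || head.equals("REPLACE");
    }

    public static boolean isDQL(String sql) {
        return leadingKeyword(sql).equals("SELECT");
    }

    // First whitespace-delimited token, upper-cased for comparison.
    private static String leadingKeyword(String sql) {
        String trimmed = sql.trim().toUpperCase(Locale.ROOT);
        int space = trimmed.indexOf(' ');
        return space < 0 ? trimmed : trimmed.substring(0, space);
    }
}
```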
public class DynamicDataSource extends AbstractRoutingDataSource implements InitializingBean {
private List<String> slaveDataSources = new ArrayList<>();
private Map<String, Integer> slaveDataSourcesWeight;
private AtomicInteger counter = new AtomicInteger(-1);
private int slaveCount = 0;
@Override
public Object determineCurrentLookupKey() {
if (DataSourceContextHolder.isSlave()) {
return getSlaveKey();
}
return "master";
}
public Object getSlaveKey() {
if (slaveCount <= 0) return null;
// Math.abs guards against a negative index if the counter ever wraps.
int index = Math.abs(counter.incrementAndGet() % slaveCount);
if (counter.get() > 9999) counter.set(-1);
return slaveDataSources.get(index);
}
}

The DataSourceContextHolder uses TransmittableThreadLocal to propagate the master/slave flag across thread pools, preventing loss of context when asynchronous tasks are spawned.
public class DataSourceContextHolder {
private static final TransmittableThreadLocal<String> singleSqlContextHolder = new TransmittableThreadLocal<>();
private static final TransmittableThreadLocal<String> multiSqlContextHolder = new TransmittableThreadLocal<>();
public static void setSingleSqlDataSourceType(String dataSourceType) { singleSqlContextHolder.set(dataSourceType); }
public static String getSingleSqlDataSourceType() { return singleSqlContextHolder.get(); }
public static void clearSingleSqlDataSourceType() { singleSqlContextHolder.remove(); }
public static void setMultiSqlDataSourceType(String dataSourceType) { multiSqlContextHolder.set(dataSourceType); }
public static String getMultiSqlDataSourceType() { return multiSqlContextHolder.get(); }
public static void clearMultiSqlDataSourceType() { multiSqlContextHolder.remove(); }
public static boolean isSlave() {
String multiSqlType = multiSqlContextHolder.get();
return "slave".equals(multiSqlType) || (multiSqlType == null && "slave".equals(singleSqlContextHolder.get()));
}
}

Dynamic Configuration
When a configuration change is received from the internal gconfig (or an open‑source alternative such as Diamond), the refreshDataSource method rebuilds the DynamicDataSource bean with new master/slave mappings and weight settings, then calls afterPropertiesSet() to re‑initialize the routing tables.
public void refreshDataSource(String properties) {
YamlDynamicDataSource dataSource;
try {
dataSource = new YamlDynamicDataSource(properties);
} catch (IOException e) {
throw new RuntimeException("convert datasource config failed!", e);
}
// validation omitted for brevity
DynamicDataSource dynamicDataSource = (DynamicDataSource) ((DefaultListableBeanFactory) beanFactory).getBean(dataSourceName);
dynamicDataSource.setResolvedDefaultDataSource(dataSource.getResolvedDefaultDataSource());
dynamicDataSource.setResolvedDataSources(new HashMap<>());
dataSource.getResolvedDataSources().forEach(dynamicDataSource::putNewDataSource);
dynamicDataSource.setSlaveDataSourcesWeight(dataSource.getSlaveDataSourcesWeight());
dynamicDataSource.afterPropertiesSet();
}

Performance
Load tests showed that RobustDB achieved lower latency and higher throughput compared with the original Atlas proxy, confirming that client‑side routing can reduce the overhead introduced by the proxy layer.
Overall, RobustDB provides a concise, extensible, and high‑performance alternative for read‑write splitting in Java/Spring applications, with clear separation of concerns, thread‑safe context propagation, and dynamic reconfiguration capabilities.
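As a closing illustration, the weighted round‑robin selection described for DynamicDataSource can be sketched in isolation. The weight‑to‑list expansion below is an assumption, since the article never shows how slaveDataSources is populated from slaveDataSourcesWeight:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Standalone sketch of weighted round-robin slave selection: each slave
// key appears in the candidate list once per unit of weight, and a shared
// counter cycles through the expanded list.
final class SlaveSelector {
    private final List<String> keys = new ArrayList<>();
    private final AtomicInteger counter = new AtomicInteger(-1);

    SlaveSelector(Map<String, Integer> slaveWeights) {
        slaveWeights.forEach((key, weight) -> {
            for (int i = 0; i < weight; i++) {
                keys.add(key);
            }
        });
    }

    String next() {
        if (keys.isEmpty()) {
            return null;
        }
        // Math.abs guards against the counter wrapping past Integer.MAX_VALUE.
        int index = Math.abs(counter.incrementAndGet() % keys.size());
        return keys.get(index);
    }
}
```

With weights {slave1: 2, slave2: 1}, the selector returns slave1 twice for every slave2, matching round‑robin over the expanded key list.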
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.