Dynamic Multi-DataSource Management and Transaction Handling in Spring
This article explains how to dynamically manage multiple data sources in Spring and keep transactions consistent across a master database and several application databases. It covers a configuration-file solution built on AbstractRoutingDataSource, a database-table solution that allows runtime changes, and a custom multi-database transaction manager.
1. Background
In a system with one master database and N application databases, both need to be accessed simultaneously, raising two problems: dynamic management and switching of multiple data sources, and ensuring data consistency (transactions) across them.
2. DataSource Switching Principle
By extending Spring's AbstractRoutingDataSource, dynamic routing can be achieved. The class exposes targetDataSources and defaultTargetDataSource for configuration; during bean initialization, afterPropertiesSet copies the targets into an internal resolvedDataSources map, so further data sources cannot be added at runtime.
The abstract method determineCurrentLookupKey must be overridden to return a lookup key (usually stored in a thread-local context), which is used to fetch the appropriate DataSource from resolvedDataSources.
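To make the mechanism concrete, here is a JDK-only sketch of the routing idea. The class names DataSourceContextHolder and RoutingDataSourceSketch are illustrative; the real logic lives in Spring's AbstractRoutingDataSource.

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Thread-local holder for the current lookup key (illustrative name)
class DataSourceContextHolder {
    private static final ThreadLocal<String> KEY = new ThreadLocal<>();

    static void set(String key) { KEY.set(key); }
    static String get() { return KEY.get(); }
    static void clear() { KEY.remove(); }
}

// JDK-only stand-in for the routing logic of AbstractRoutingDataSource
class RoutingDataSourceSketch {
    // afterPropertiesSet copies targetDataSources into this resolved map,
    // which is why entries cannot be added after bean creation
    private final Map<String, DataSource> resolvedDataSources = new HashMap<>();
    private final DataSource defaultTargetDataSource;

    RoutingDataSourceSketch(DataSource defaultTarget, Map<String, DataSource> targets) {
        this.defaultTargetDataSource = defaultTarget;
        this.resolvedDataSources.putAll(targets);
    }

    // Mirrors determineCurrentLookupKey(): read the thread-local key
    String determineCurrentLookupKey() {
        return DataSourceContextHolder.get();
    }

    // Mirrors determineTargetDataSource(): key -> DataSource, with a default fallback
    DataSource determineTargetDataSource() {
        String key = determineCurrentLookupKey();
        DataSource ds = (key == null) ? null : resolvedDataSources.get(key);
        return (ds != null) ? ds : defaultTargetDataSource;
    }

    Connection getConnection() throws SQLException {
        return determineTargetDataSource().getConnection();
    }
}
```

Setting the key on the current thread before calling getConnection is all that is needed to route to a different database.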
3. Configuration‑File Solution
Define a DynamicDataSource class that extends AbstractRoutingDataSource and overrides determineCurrentLookupKey. Configure multiple data sources in application.yml (or .properties) and inject them into targetDataSources and defaultTargetDataSource; afterPropertiesSet then resolves them during initialization. A getConnection call triggers determineTargetDataSource, which returns the selected DataSource.
Example configuration and bean definitions are shown below:
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.driverClassName=com.mysql.jdbc.Driver
# master datasource
spring.datasource.druid.master.url=jdbcUrl
spring.datasource.druid.master.username=***
spring.datasource.druid.master.password=***
# secondary datasource
spring.datasource.druid.second.url=jdbcUrl
spring.datasource.druid.second.username=***
spring.datasource.druid.second.password=***

@Configuration
public class DynamicDataSourceConfig {

    @Bean
    @ConfigurationProperties("spring.datasource.druid.master")
    public DataSource firstDataSource() {
        return DruidDataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.druid.second")
    public DataSource secondDataSource() {
        return DruidDataSourceBuilder.create().build();
    }

    @Bean
    @Primary
    public DynamicDataSource dataSource(DataSource firstDataSource, DataSource secondDataSource) {
        Map<Object, Object> targetDataSources = new HashMap<>(5);
        targetDataSources.put(DataSourceNames.FIRST, firstDataSource);
        targetDataSources.put(DataSourceNames.SECOND, secondDataSource);
        return new DynamicDataSource(firstDataSource, targetDataSources);
    }
}

3.1 AOP Handling
Use a DataSourceAspect and a custom @SwitchDataSource annotation to change the lookup key in thread‑local storage before method execution.
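Independent of Spring AOP plumbing, the essence of such advice is to save the previous key, set the new one, and restore the old key in a finally block. A JDK-only sketch of that pattern (SwitchContext is an illustrative name, not a Spring class):

```java
import java.util.function.Supplier;

// JDK-only sketch of what a @SwitchDataSource around-advice does:
// swap the thread-local lookup key for the duration of the call,
// then restore the previous key so outer scopes are not disturbed.
class SwitchContext {
    private static final ThreadLocal<String> KEY = new ThreadLocal<>();

    static String currentKey() { return KEY.get(); }

    static <T> T callWith(String dataSourceKey, Supplier<T> body) {
        String previous = KEY.get();
        KEY.set(dataSourceKey);          // before advice: switch the key
        try {
            return body.get();           // the intercepted method (point.proceed())
        } finally {
            // after advice: restore, so the caller sees its own key again
            if (previous == null) KEY.remove(); else KEY.set(previous);
        }
    }
}
```

Restoring (rather than simply clearing) the key is what keeps nested switches correct.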
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
public @interface SwitchDataSource {
    String value();
}

3.2 Limitations
The AbstractRoutingDataSource approach cannot add or remove data sources after the bean is created, which is unsuitable for scenarios requiring dynamic data‑source management.
4. Database‑Table Solution
Store data‑source definitions in a database table and load them at startup, allowing runtime addition, deletion, and status monitoring. Define a DataSourceManager interface for CRUD operations on the table.
public interface DataSourceManager {
    void put(String key, DataSource dataSource);
    DataSource get(String key);
    Boolean hasDataSource(String key);
    void remove(String key);
    void closeDataSource(String key);
    Collection<DataSource> all();
}

4.1 Custom DataSource Implementation
The custom DynamicDataSource implements DataSourceManager and loads entries from the table into the routing map, overcoming the static‑configuration limitation.
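A minimal JDK-only sketch of such an implementation, assuming a concurrent map as the runtime routing table (DataSourceRegistry is an illustrative name; the interface from section 4 is restated so the sketch is self-contained):

```java
import javax.sql.DataSource;
import java.util.Collection;
import java.util.concurrent.ConcurrentHashMap;

// Interface from section 4, repeated for a self-contained sketch
interface DataSourceManager {
    void put(String key, DataSource dataSource);
    DataSource get(String key);
    Boolean hasDataSource(String key);
    void remove(String key);
    void closeDataSource(String key);
    Collection<DataSource> all();
}

// Map-backed manager; in the article's design this map doubles as the
// routing table consulted when a connection is requested.
class DataSourceRegistry implements DataSourceManager {
    private final ConcurrentHashMap<String, DataSource> dataSources = new ConcurrentHashMap<>();

    @Override public void put(String key, DataSource dataSource) { dataSources.put(key, dataSource); }
    @Override public DataSource get(String key) { return dataSources.get(key); }
    @Override public Boolean hasDataSource(String key) { return dataSources.containsKey(key); }
    @Override public void remove(String key) { dataSources.remove(key); }

    @Override public void closeDataSource(String key) {
        DataSource removed = dataSources.remove(key);
        // Pooled implementations (Druid, HikariCP) expose close(); hedge via AutoCloseable
        if (removed instanceof AutoCloseable) {
            try { ((AutoCloseable) removed).close(); } catch (Exception ignored) { }
        }
    }

    @Override public Collection<DataSource> all() { return dataSources.values(); }
}
```

Because the map is mutable at runtime, entries loaded from the database table at startup can later be added, replaced, or closed without recreating any bean.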
5. Multi‑Database Transaction Management
Spring’s DataSourceTransactionManager works per data source, so switching data sources inside a transaction does not propagate the transaction. To achieve consistency across multiple databases, a custom transaction manager wraps the JDBC Connection and suppresses its commit and close methods.
public class ConnectionProxy implements Connection {
    private final Connection connection;

    public ConnectionProxy(Connection connection) {
        this.connection = connection;
    }

    @Override
    public void commit() throws SQLException {
        // suppressed: the multi-transaction manager decides when to commit
    }

    public void realCommit() throws SQLException {
        connection.commit();
    }

    @Override
    public void close() throws SQLException {
        // suppressed: the connection stays open until the outermost transaction ends
    }

    public void realClose() throws SQLException {
        if (!connection.getAutoCommit()) {
            connection.setAutoCommit(true);
        }
        connection.close();
    }

    @Override
    public void rollback() throws SQLException {
        if (!connection.isClosed()) {
            connection.rollback();
        }
    }

    // other Connection methods delegate to the wrapped connection (omitted)
}

A TransactionHolder keeps thread-local stacks of transaction IDs, lookup keys, and connection proxies, enabling nested transactions and proper commit/rollback at the outermost level.
public class TransactionHolder {
    private boolean isOpen;
    private boolean readOnly;
    private IsolationLevel isolationLevel;
    // data-source key -> wrapped connection for the current thread
    private ConcurrentHashMap<String, ConnectionProxy> connectionMap;
    // nested execution ids
    private Stack<String> executeStack;
    // lookup keys, pushed/popped as the data source is switched
    private Stack<String> datasourceKeyStack;
    private String mainTransactionId;
    private AtomicInteger transCount;
    // execution id -> data-source key
    private ConcurrentHashMap<String, String> executeIdDatasourceKeyMap;
}

5.1 AOP for Multi-Transaction
The MultiTransactionAop aspect intercepts methods annotated with @MultiTransaction, creates a transaction context, switches the data source, and ensures that commit or rollback is performed only when the outermost transaction finishes.
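The commit-at-outermost bookkeeping the aspect relies on can be sketched without Spring: a depth counter (playing the role of transCount in TransactionHolder) defers all commits or rollbacks until the outermost frame exits. Names below are illustrative, and Runnables stand in for realCommit()/rollback() calls on the connection proxies:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// JDK-only sketch of nested-transaction bookkeeping (illustrative name).
class NestedTxSketch {
    private int depth;                 // transCount: how many frames are open
    private boolean rollbackOnly;      // any inner failure poisons the whole unit
    private final Deque<Runnable> commits = new ArrayDeque<>();
    private final Deque<Runnable> rollbacks = new ArrayDeque<>();

    void begin() { depth++; }

    // Register the actions for one enlisted connection
    void enlist(Runnable commit, Runnable rollback) {
        commits.push(commit);
        rollbacks.push(rollback);
    }

    // Called on method exit; only the outermost frame (depth back to 0) acts
    void end(boolean failed) {
        if (failed) rollbackOnly = true;
        if (--depth > 0) return;       // inner frame: defer to the outermost
        Deque<Runnable> actions = rollbackOnly ? rollbacks : commits;
        while (!actions.isEmpty()) actions.pop().run();
        commits.clear();
        rollbacks.clear();
        rollbackOnly = false;
    }
}
```

An inner frame that fails only flips the rollback flag; the actual rollback of every enlisted connection still happens in one place, when the outermost frame ends.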
@Aspect
@Component
@Order(99999)
public class MultiTransactionAop {

    @Pointcut("@annotation(com.github.mtxn.transaction.annotation.MultiTransaction)")
    public void pointcut() {}

    @Around("pointcut()")
    public Object aroundTransaction(ProceedingJoinPoint point) throws Throwable {
        // omitted for brevity
    }
}

6. Summary
The article presents two solutions for dynamic multi‑data‑source management in Spring: a static configuration approach using AbstractRoutingDataSource and a flexible database‑table approach combined with a custom transaction manager that can coordinate commits across several databases within a single JVM. The latter is suitable for monolithic applications, while distributed systems should adopt a dedicated distributed‑transaction framework such as Seata.