Backend Development

Integrating Spring Boot with MyBatis, Druid, and PageHelper for Multi‑DataSource and Pagination

This article demonstrates how to combine Spring Boot with MyBatis, Druid connection pool, and PageHelper pagination plugin to configure multiple data sources, set up monitoring, and perform efficient paged queries, providing complete Maven dependencies, property settings, and Java configuration examples.

Java Captain

This tutorial explains how to integrate Spring Boot with MyBatis, Druid, and PageHelper, focusing on multi‑data‑source configuration and pagination. It assumes prior knowledge of Spring Boot + MyBatis integration and concentrates on Druid usage and PageHelper setup.

Druid Introduction and Usage

Druid is a high‑performance, extensible database connection pool developed by Alibaba, widely used in production. Its core consists of a filter‑chain plugin system, DruidDataSource, and an SQL parser. Main features include monitoring, password encryption, SQL logging, and JDBC extension.

To use Druid, add the Maven dependency:

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.8</version>
</dependency>

Configure the data sources in application.properties (or application.yml) as follows, with a default (master) and a secondary (cluster) data source:

# Default data source
master.datasource.url=jdbc:mysql://localhost:3306/springBoot?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true
master.datasource.username=root
master.datasource.password=123456
master.datasource.driverClassName=com.mysql.jdbc.Driver

# Secondary data source
cluster.datasource.url=jdbc:mysql://localhost:3306/springBoot_test?useUnicode=true&characterEncoding=utf8
cluster.datasource.username=root
cluster.datasource.password=123456
cluster.datasource.driverClassName=com.mysql.jdbc.Driver

# Connection pool settings
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.initialSize=5
spring.datasource.minIdle=5
spring.datasource.maxActive=20
spring.datasource.maxWait=60000
spring.datasource.timeBetweenEvictionRunsMillis=60000
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1 FROM DUAL
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=false
spring.datasource.testOnReturn=false
spring.datasource.poolPreparedStatements=true
spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
spring.datasource.filters=stat,wall,log4j
spring.datasource.connectionProperties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
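Note that master.* and cluster.* are custom prefixes, so Spring Boot's auto-configuration will not bind them; the configuration classes that follow inject them with @Value. The shared pool settings under spring.datasource.* are injected the same way. As a hedged sketch, the fields elided as "omitted for brevity" in the configuration class might look like this (field names mirror the property keys; the exact list in the original project may differ):

```java
// Illustrative @Value fields for the Druid pool settings; these are the
// fields the configuration class refers to but omits for brevity.
@Value("${spring.datasource.initialSize}")
private int initialSize;

@Value("${spring.datasource.minIdle}")
private int minIdle;

@Value("${spring.datasource.maxActive}")
private int maxActive;

@Value("${spring.datasource.maxWait}")
private int maxWait;

// Monitoring filters (stat, wall, log4j) and merged-SQL/slow-SQL settings
@Value("${spring.datasource.filters}")
private String filters;

@Value("${spring.datasource.connectionProperties}")
private String connectionProperties;
```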

After adding the configuration, create the Druid-related Java classes. The primary data source configuration class (MasterDataSourceConfig) looks like:

@Configuration
@MapperScan(basePackages = MasterDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "masterSqlSessionFactory")
public class MasterDataSourceConfig {
    static final String PACKAGE = "com.pancm.dao.master";
    static final String MAPPER_LOCATION = "classpath:mapper/master/*.xml";
    @Value("${master.datasource.url}") private String url;
    @Value("${master.datasource.username}") private String username;
    @Value("${master.datasource.password}") private String password;
    @Value("${master.datasource.driverClassName}") private String driverClassName;
    // other @Value fields for pool properties omitted for brevity
    @Bean(name = "masterDataSource")
    @Primary
    public DataSource masterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName(driverClassName);
        // set pool properties (initialSize, minIdle, maxActive, etc.)
        try { dataSource.setFilters(filters); } catch (SQLException e) { e.printStackTrace(); }
        dataSource.setConnectionProperties(connectionProperties);
        return dataSource;
    }
    @Bean(name = "masterTransactionManager")
    @Primary
    public DataSourceTransactionManager masterTransactionManager() {
        return new DataSourceTransactionManager(masterDataSource());
    }
    @Bean(name = "masterSqlSessionFactory")
    @Primary
    public SqlSessionFactory masterSqlSessionFactory(@Qualifier("masterDataSource") DataSource masterDataSource) throws Exception {
        SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(masterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
    // @Primary annotation indicates the bean is preferred when multiple candidates exist.
}

The secondary data source configuration (ClusterDataSourceConfig) is similar, but without @Primary and with its own package and mapper location.

@Configuration
@MapperScan(basePackages = ClusterDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "clusterSqlSessionFactory")
public class ClusterDataSourceConfig {
    static final String PACKAGE = "com.pancm.dao.cluster";
    static final String MAPPER_LOCATION = "classpath:mapper/cluster/*.xml";
    @Value("${cluster.datasource.url}") private String url;
    @Value("${cluster.datasource.username}") private String username;
    @Value("${cluster.datasource.password}") private String password;
    @Value("${cluster.datasource.driverClassName}") private String driverClass;
    @Bean(name = "clusterDataSource")
    public DataSource clusterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName(driverClass);
        // pool properties omitted for brevity
        return dataSource;
    }
    @Bean(name = "clusterTransactionManager")
    public DataSourceTransactionManager clusterTransactionManager() {
        return new DataSourceTransactionManager(clusterDataSource());
    }
    @Bean(name = "clusterSqlSessionFactory")
    public SqlSessionFactory clusterSqlSessionFactory(@Qualifier("clusterDataSource") DataSource clusterDataSource) throws Exception {
        SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(clusterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
}
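Because each @MapperScan points at a different package and SqlSessionFactory, routing is determined purely by where a mapper interface lives. As an illustrative sketch (the interface, entity, and method names are assumptions, not taken from the original project), a mapper placed under the master package is automatically executed against the default data source:

```java
package com.pancm.dao.master;

import java.util.List;
import org.apache.ibatis.annotations.Mapper;

// Lives under com.pancm.dao.master, so the master @MapperScan picks it up
// and binds it to masterSqlSessionFactory (the default data source).
// The User entity and method name are hypothetical; the SQL itself would
// be defined in mapper/master/*.xml per MAPPER_LOCATION.
@Mapper
public interface UserDao {
    List<User> findByListEntity(User user);
}
```

A mapper under com.pancm.dao.cluster would, by the same mechanism, run against the secondary database without any further routing code.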

After configuring both data sources, you can start the application and test CRUD operations on two tables (t_user in the default database and t_student in the secondary database). Example HTTP requests:

POST http://localhost:8084/api/user
{ "name": "Zhang San", "age": 25 }
POST http://localhost:8084/api/student
{ "name": "Student A", "age": 16 }

GET requests retrieve specific records, e.g., GET http://localhost:8084/api/user?name=Li Si (with the name URL-encoded) returns the matching user.
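The article does not show the controller layer, but the endpoints above could be backed by a standard Spring MVC controller along these lines (the controller, service, and entity names here are hypothetical assumptions, not the original project's code):

```java
// Hypothetical REST controller for the /api/user endpoint exercised above.
// UserService and User are placeholder types for illustration only.
@RestController
@RequestMapping("/api")
public class UserController {

    @Autowired
    private UserService userService;

    // POST /api/user — writes through the master data source
    @PostMapping("/user")
    public boolean addUser(@RequestBody User user) {
        return userService.save(user);
    }

    // GET /api/user?name=... — looks up a user by name
    @GetMapping("/user")
    public User getUser(@RequestParam String name) {
        return userService.findByName(name);
    }
}
```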

PageHelper Pagination Implementation

PageHelper is a MyBatis pagination plugin. Add the Maven dependency:

<dependency>
    <groupId>com.github.pagehelper</groupId>
    <artifactId>pagehelper-spring-boot-starter</artifactId>
    <version>1.2.3</version>
</dependency>

Configuration can be placed in application.properties:

pagehelper.helperDialect=mysql
pagehelper.offsetAsPageNum=true
pagehelper.rowBoundsWithCount=true
pagehelper.reasonable=false
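Independent of how the plugin is wired up, what startPage does mechanically is rewrite the query with a LIMIT clause derived from the page number and page size. A minimal plain-Java sketch of that arithmetic (the class and method names are illustrative, not part of PageHelper's API):

```java
// Sketch of the LIMIT arithmetic PageHelper performs when it rewrites a
// MySQL query: startPage(pageNum, pageSize) becomes "LIMIT offset, pageSize".
public class PagingMath {

    // Convert a 1-based page number and page size into a 0-based SQL offset.
    public static int toOffset(int pageNum, int pageSize) {
        return (pageNum - 1) * pageSize;
    }

    // Build the MySQL LIMIT clause appended to the original query.
    public static String limitClause(int pageNum, int pageSize) {
        return "LIMIT " + toOffset(pageNum, pageSize) + ", " + pageSize;
    }

    public static void main(String[] args) {
        // startPage(1, 2): first page, two rows per page
        System.out.println(limitClause(1, 2));  // LIMIT 0, 2
        // startPage(3, 10): rows 21-30
        System.out.println(limitClause(3, 10)); // LIMIT 20, 10
    }
}
```

This also clarifies pagehelper.reasonable: with reasonable=false, an out-of-range page number is passed through to SQL as-is (returning an empty list) rather than being clamped to the first or last page.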

Or defined programmatically in a @Bean:

@Bean
public PageHelper pageHelper() {
    PageHelper pageHelper = new PageHelper();
    Properties p = new Properties();
    p.setProperty("helperDialect", "mysql");
    p.setProperty("offsetAsPageNum", "true");
    p.setProperty("rowBoundsWithCount", "true");
    p.setProperty("reasonable", "false");
    pageHelper.setProperties(p);
    return pageHelper;
}

When using multiple data sources, add the pagination interceptor to the SqlSessionFactory of the primary source:

@Bean(name = "masterSqlSessionFactory")
@Primary
public SqlSessionFactory masterSqlSessionFactory(@Qualifier("masterDataSource") DataSource masterDataSource) throws Exception {
    SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
    sessionFactory.setDataSource(masterDataSource);
    sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(MasterDataSourceConfig.MAPPER_LOCATION));
    // pagination plugin
    Interceptor interceptor = new PageInterceptor();
    Properties props = new Properties();
    props.setProperty("helperDialect", "mysql");
    props.setProperty("offsetAsPageNum", "true");
    props.setProperty("rowBoundsWithCount", "true");
    props.setProperty("reasonable", "false");
    interceptor.setProperties(props);
    sessionFactory.setPlugins(new Interceptor[] {interceptor});
    return sessionFactory.getObject();
}

In service code, invoke pagination before the query:

public List<T> findByListEntity(T entity) {
    List<T> list = null;
    try {
        // startPage must be called immediately before the MyBatis query it paginates
        Page<T> page = PageHelper.startPage(1, 2);
        System.out.println(getClassName(entity) + ": requesting page 1 with 2 records!");
        list = getMapper().findByListEntity(entity);
        System.out.println("Total records: " + page.getTotal() + ", actually returned: " + list.size() + " records!");
    } catch (Exception e) {
        logger.error("Query for " + getClassName(entity) + " failed! Cause:", e);
    }
    return list;
}
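Rather than reading totals off the Page object directly, callers often wrap the result in PageHelper's PageInfo, which bundles the rows together with pagination metadata. A brief sketch (the mapper and entity names are illustrative):

```java
// Wrapping a paged query result in PageInfo (provided by the PageHelper
// library) to expose pagination metadata alongside the rows.
PageHelper.startPage(1, 2);                        // must directly precede the query
List<User> users = userDao.findByListEntity(query);
PageInfo<User> pageInfo = new PageInfo<>(users);   // captures totals from the paged result
long total = pageInfo.getTotal();                  // total matching rows across all pages
int pages  = pageInfo.getPages();                  // total number of pages
```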

Running the application and calling the endpoints shows paginated results for both t_user and t_student, and the console logs display the generated SQL with pagination limits. The Druid web console (http://127.0.0.1:8084/druid/index.html) provides clear monitoring of SQL execution, connection pool statistics, and slow-query logs.

Conclusion

The article walks through configuring Druid as a monitoring‑enabled connection pool, setting up two independent data sources in Spring Boot, and integrating PageHelper for convenient pagination. While the example is simple, the same patterns can be extended to more complex enterprise applications.

Tags: Java, Spring Boot, MyBatis, PageHelper, Druid, multi-datasource
Written by

Java Captain

Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.
