
Master Distributed Task Scheduling with XXL-JOB: Docker Setup to Spring Boot Integration

This comprehensive guide explains what XXL-JOB is, why distributed task scheduling is needed, and walks through installing XXL-JOB with Docker, configuring it in a Spring Boot project, creating custom jobs, handling sharding, ensuring idempotency, and monitoring execution, all illustrated with code and diagrams.

macrozheng

Preface

This article records practical experiences with XXL-JOB, explaining what it is, how to use it, and presenting a real‑world case study.

What is XXL-JOB?

XXL-JOB is a distributed task scheduling platform designed for rapid development, simplicity, lightweight operation, and easy extensibility. Its core idea is to separate the scheduling center from business logic: the center issues scheduling requests, while executors receive those requests and run the tasks, which are abstracted as JobHandlers.

What is Task Scheduling?

Task scheduling automatically executes specified tasks at predetermined times, solving scenarios such as daily data backup at midnight, pre‑warming business logic before an event, and retrying failed MQ messages.

In monolithic systems, scheduling can be implemented with plain threads, or with Spring's @EnableScheduling plus @Scheduled annotations, and similar in-process mechanisms. In distributed systems, a dedicated scheduling platform is required.
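For instance, a minimal in-process scheduler in a monolith might look like this (a sketch; the class name, cron expression, and method body are illustrative):

<code>import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

// A minimal in-process scheduler for a monolith: the task runs on this
// single application node, which is exactly what breaks down in a cluster.
@Configuration
@EnableScheduling
public class BackupScheduler {

    // Runs every day at midnight on the local node
    @Scheduled(cron = "0 0 0 * * ?")
    public void nightlyBackup() {
        // ... back up data here ...
    }
}</code>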

Why Use a Distributed Scheduling Platform?

In a cluster, each service can run a scheduler, but this raises problems: controlling duplicate execution, handling node failures, scaling instances, and unified monitoring. A platform like XXL-JOB provides high availability, fault tolerance, and load balancing.

Other popular distributed schedulers include Quartz and Elastic-Job.

How to Use XXL-JOB

Install XXL-JOB with Docker

Pull the Docker image (version 2.3.1):

<code>docker pull xuxueli/xxl-job-admin:2.3.1</code>

Create a data directory:

<code>mkdir -p -m 777 /mydata/xxl-job/data/applogs</code>

Create application.properties under /mydata/xxl-job (configure the database connection, ports, alarm email, access token, etc.).

Import tables_xxl-job.sql into the configured database.

Run the container (adjust the port mapping if you changed the port in application.properties):

<code>docker run -p 8088:8088 \
  -d --name=xxl-job-admin --restart=always \
  -v /mydata/xxl-job/application.properties:/application.properties \
  -v /mydata/xxl-job/data/applogs:/data/applogs \
  -e PARAMS='--spring.config.location=/application.properties' \
  xuxueli/xxl-job-admin:2.3.1</code>

Verify the container with docker ps and, if needed, view its logs via docker logs xxl-job-admin.

Access the admin console at http://<your_ip>:8088/xxl-job-admin/ (default credentials: admin / 123456).

At this point the XXL-JOB scheduling center is up and running.

Integrate XXL-JOB into a Spring Boot Project

Add Maven dependency:

<code>&lt;dependency&gt;
  &lt;groupId&gt;com.xuxueli&lt;/groupId&gt;
  &lt;artifactId&gt;xxl-job-core&lt;/artifactId&gt;
  &lt;version&gt;2.3.1&lt;/version&gt;
&lt;/dependency&gt;</code>

Configure application.yml with the admin address, access token, executor name, IP, port, log path, and log retention days:

<code>xxl:
  job:
    admin:
      addresses: http://192.168.101.25:8088/xxl-job-admin
    executor:
      appname: media-process-service
      address:
      ip:
      port: 9999
      logpath: /data/applogs/xxl-job/jobhandler
      logretentiondays: 30
    accessToken: default_token</code>

Create a configuration class that builds an XxlJobSpringExecutor bean from the properties above.
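One possible shape for that class, wiring the YAML properties shown earlier into XxlJobSpringExecutor (the property paths mirror that configuration):

<code>import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XxlJobConfig {

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;
    @Value("${xxl.job.accessToken}")
    private String accessToken;
    @Value("${xxl.job.executor.appname}")
    private String appname;
    @Value("${xxl.job.executor.ip}")
    private String ip;
    @Value("${xxl.job.executor.port}")
    private int port;
    @Value("${xxl.job.executor.logpath}")
    private String logPath;
    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    // Registers the executor with the scheduling center on startup
    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        XxlJobSpringExecutor executor = new XxlJobSpringExecutor();
        executor.setAdminAddresses(adminAddresses);
        executor.setAccessToken(accessToken);
        executor.setAppname(appname);
        executor.setIp(ip);
        executor.setPort(port);
        executor.setLogPath(logPath);
        executor.setLogRetentionDays(logRetentionDays);
        return executor;
    }
}</code>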

Add an Executor

In the admin console, add a new executor and configure automatic registration, name, and routing strategy (e.g., Sharding Broadcast).

Define Custom Jobs (Bean Mode)

Annotate a method with @XxlJob("testHandler") to create a job handler. Use XxlJobHelper.log for logging and XxlJobHelper.handleSuccess / XxlJobHelper.handleFail to set execution results.

<code>@Component
public class TestJob {
    @XxlJob("testHandler")
    public void testHandler() {
        XxlJobHelper.handleSuccess("Task executed successfully");
    }
}</code>

Create a Real‑World Job: Video Transcoding

The case study processes video files stored in MinIO. It demonstrates sharding broadcast, optimistic‑lock based idempotency, multi‑threaded execution, and status updates.

Obtain the shard index and total shard count via XxlJobHelper.getShardIndex() and XxlJobHelper.getShardTotal().

Query tasks with a modulo operation on the task ID to distribute work across executors.
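The modulo-based distribution can be sketched in plain Java (the ShardFilter class and the id list are illustrative, not part of the article's code): each executor keeps only the ids where id % shardTotal == shardIndex, so the id space is partitioned with no overlap.

<code>import java.util.List;
import java.util.stream.Collectors;

public class ShardFilter {

    // Keep only the task ids assigned to this shard; across all shards
    // every id is picked up by exactly one executor.
    public static List<Long> tasksForShard(List<Long> ids, int shardIndex, int shardTotal) {
        return ids.stream()
                .filter(id -> id % shardTotal == shardIndex)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> ids = List.of(1L, 2L, 3L, 4L, 5L, 6L);
        System.out.println(tasksForShard(ids, 0, 2)); // [2, 4, 6]
        System.out.println(tasksForShard(ids, 1, 2)); // [1, 3, 5]
    }
}</code>

In production the filter is pushed into the SQL query (e.g. a MOD(id, #{shardTotal}) = #{shardIndex} condition) rather than done in memory.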

Use a CountDownLatch to wait for all parallel tasks to finish (with a 30-minute timeout).

Before processing each task, acquire it with an optimistic‑lock update (status changes from 1 or 3 to 2).
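A sketch of that optimistic-lock acquisition, assuming a MyBatis mapper; the table and column names are illustrative:

<code>import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;

public interface MediaProcessMapper {

    // Atomically move a task from "pending" (1) or "failed" (3) to
    // "processing" (2). The status condition makes the update a compare-and-set:
    // of two executors racing for the same row, one sees an update count of 1
    // and the other sees 0.
    @Update("UPDATE media_process SET status = '2' " +
            "WHERE id = #{id} AND (status = '1' OR status = '3')")
    int startTask(@Param("id") Long id);
}</code>

In the service layer, startTask would return true only when the update count is 1, so exactly one executor wins each task.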

Download the source video from MinIO, transcode to MP4, upload the result back to MinIO, and finally set the task status to success.

Configure XXL-JOB's schedule-expired (misfire) strategy to "Ignore" and the blocking strategy to "Discard Later", so that an overdue or still-running job is never triggered again concurrently; this helps guarantee idempotency.

<code>@XxlJob("videoTranscodingHandler")
public void videoTranscodingHandler() throws InterruptedException {
    // Sharding broadcast: each executor instance receives its own shard index
    int shardIndex = XxlJobHelper.getShardIndex();
    int shardTotal = XxlJobHelper.getShardTotal();
    List<MediaProcess> list = mediaProcessService.getMediaProcessList(shardIndex, shardTotal, count);
    CountDownLatch latch = new CountDownLatch(list.size());
    list.forEach(mp -> executor.execute(() -> {
        try {
            // Optimistic-lock acquisition: only one executor wins each task
            boolean acquired = mediaProcessService.startTask(mp.getId());
            if (!acquired) {
                XxlJobHelper.log("Task acquisition failed, id {}", mp.getId());
                return;
            }
            // Download the source video from MinIO
            File file = mediaFileService.downloadFileFromMinIO(mp.getBucket(), mp.getObjectName());
            if (file == null) {
                XxlJobHelper.log("Download failed, id {}", mp.getId());
                mediaProcessService.saveProcessFinishStatus(mp.getId(), MediaProcessCode.FAIL.getValue(), null, "Download error");
                return;
            }
            // Transcode the downloaded file to MP4
            File mp4File = new File(file.getAbsolutePath() + ".mp4");
            String result = videoUtil.generateMp4(file.getAbsolutePath(), mp4File.getAbsolutePath());
            if (!"success".equals(result)) {
                XxlJobHelper.log("Transcoding failed, id {}", mp.getId());
                mediaProcessService.saveProcessFinishStatus(mp.getId(), MediaProcessCode.FAIL.getValue(), null, "Transcoding error");
                return;
            }
            // Upload the transcoded file back to MinIO
            boolean uploaded = mediaFileService.addMediaFilesToMinIO(mp4File.getAbsolutePath(), "video/mp4", mp.getBucket(), mp.getObjectNameMp4());
            if (!uploaded) {
                XxlJobHelper.log("Upload failed, id {}", mp.getId());
                mediaProcessService.saveProcessFinishStatus(mp.getId(), MediaProcessCode.FAIL.getValue(), null, "Upload error");
                return;
            }
            // Mark the task as finished, recording the object name of the MP4 file
            mediaProcessService.saveProcessFinishStatus(mp.getId(), MediaProcessCode.SUCCESS.getValue(), mp.getObjectNameMp4(), "Success");
        } finally {
            latch.countDown();
        }
    }));
    latch.await(30, TimeUnit.MINUTES);
}</code>

Additional Maintenance Jobs

Periodically clean up successfully processed records and move them to a history table.

A compensation job scans for tasks stuck in "processing" for over 30 minutes, or with a failure count greater than 3, resetting their status or flagging them for manual handling.
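A hedged sketch of such a compensation job; the service methods resetStuckTasks and flagTasksExceedingFailures are assumed helpers, not the article's actual code:

<code>import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import java.time.Duration;

@Component
public class CompensationJob {

    @Autowired
    private MediaProcessService mediaProcessService;

    @XxlJob("compensationHandler")
    public void compensationHandler() {
        // Reset tasks stuck in "processing" for over 30 minutes
        // (e.g. the executor crashed mid-task) so they can be retried
        int reset = mediaProcessService.resetStuckTasks(Duration.ofMinutes(30));
        // Flag tasks that have failed more than 3 times for manual handling
        int flagged = mediaProcessService.flagTasksExceedingFailures(3);
        XxlJobHelper.log("Compensation: reset {} stuck tasks, flagged {} for manual handling", reset, flagged);
    }
}</code>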

The tutorial demonstrates the complete lifecycle of integrating XXL‑JOB into a Spring Boot microservice, handling sharding, idempotency, logging, and monitoring.

Tags: Docker, distributed scheduling, microservices, Spring Boot, xxl-job
Written by macrozheng

Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. The author's GitHub project "mall" has 50K+ stars.