
Practical Guide to Using XXL‑JOB for Distributed Task Scheduling with Spring Boot and Docker

This article explains what XXL‑JOB is, why a distributed task‑scheduling platform is needed, and provides a step‑by‑step tutorial—including Docker deployment, Spring Boot integration, sharding, idempotency handling, and a video‑transcoding use case—to help developers implement reliable distributed jobs in Java applications.


XXL‑JOB is an open‑source distributed task‑scheduling platform designed for rapid development, simplicity, lightweight operation, and easy extensibility. It separates the scheduling center from the executor, allowing tasks (JobHandlers) to be decoupled from scheduling logic, which improves system stability and scalability.

Typical scheduling scenarios such as daily data backup, pre‑heat operations before an event, or retrying failed MQ messages can be handled by a scheduler that automatically triggers tasks at predefined times.

In a monolithic architecture, developers often use @EnableScheduling and @Scheduled annotations, but these approaches do not scale well in clustered environments. Distributed systems face challenges like duplicate execution, task loss on node failure, elastic scaling, and unified monitoring, which necessitate a dedicated scheduling platform such as XXL‑JOB, Quartz, or Elastic‑Job.
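For contrast, the in-process model behind @Scheduled can be sketched with the JDK's ScheduledExecutorService (the class and method names below are illustrative, not from any library): every node running this code fires the task independently, which is exactly the duplicate-execution problem described above.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NaiveScheduler {
    // Fires a task at a fixed rate on this node only; two nodes running the
    // same code would both fire, duplicating the work. Returns how many
    // times the task ran within `durationMillis`.
    static int runFor(long periodMillis, long durationMillis) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();
        ses.scheduleAtFixedRate(runs::incrementAndGet, 0, periodMillis, TimeUnit.MILLISECONDS);
        try {
            Thread.sleep(durationMillis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        ses.shutdownNow();
        return runs.get();
    }
}
```

A dedicated scheduling center avoids this by triggering exactly one executor (or one shard per executor) per schedule.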

Docker installation of XXL‑JOB (admin component):

docker pull xuxueli/xxl-job-admin:2.3.1
mkdir -p -m 777 /mydata/xxl-job/data/applogs

After creating /mydata/xxl-job/application.properties (omitted for brevity) and importing the provided tables_xxl-job.sql into the configured database, start the container:

docker run -p 8088:8088 \
  -d --name=xxl-job-admin --restart=always \
  -v /mydata/xxl-job/application.properties:/application.properties \
  -v /mydata/xxl-job/data/applogs:/data/applogs \
  -e PARAMS='--spring.config.location=/application.properties' \
  xuxueli/xxl-job-admin:2.3.1

Access the admin UI at http://<server-ip>:8088/xxl-job-admin/ (default credentials: admin / 123456).

Spring Boot integration requires adding the Maven dependency:

<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>2.3.1</version>
</dependency>

Configure application.yml with the admin address, executor name, IP, port, log path, and access token. Then create a configuration class that builds an XxlJobSpringExecutor bean using the injected properties.
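As a sketch, such an application.yml might look like the following; the property names follow the layout used by the official xxl-job sample project for 2.3.x, and the addresses, appname, and token are placeholders to replace with your own values:

```yaml
xxl:
  job:
    admin:
      addresses: http://127.0.0.1:8088/xxl-job-admin  # the scheduling center deployed above
    accessToken: default_token                         # must match the admin's token
    executor:
      appname: media-process-service                   # executor name shown in the admin UI
      ip: ""                                           # empty = auto-detect
      port: 9999
      logpath: /data/applogs/xxl-job/jobhandler
      logretentiondays: 30
```

The configuration class then reads these properties (for example via @Value) and sets them on an XxlJobSpringExecutor bean.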

Define custom jobs using the @XxlJob annotation. For example, a simple test job:

@Component
public class TestJob {
    @XxlJob("testHandler")
    public void testHandler() {
        XxlJobHelper.handleSuccess("Task executed successfully");
    }
}

When adding a job in the admin UI, set the desired Cron expression and make sure the JobHandler field matches the name declared in the @XxlJob annotation (here, testHandler).

Sharding is handled via the built‑in methods XxlJobHelper.getShardIndex() and XxlJobHelper.getShardTotal(). Jobs can retrieve a distinct subset of records using a modulo operation on the primary key, ensuring each executor processes its own slice.

int shardIndex = XxlJobHelper.getShardIndex();
int shardTotal = XxlJobHelper.getShardTotal();

The corresponding parameterized query then selects only this executor's slice:

SELECT * FROM media_process m
WHERE m.id % #{shardTotal} = #{shardIndex}
  AND (m.status = '1' OR m.status = '3')
  AND m.fail_count < 3
LIMIT #{count};
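The modulo slice can be sketched in plain Java (a hypothetical helper, not part of xxl-job-core) to show that the shards partition the id space without overlap:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ShardFilter {
    // Keep only the ids assigned to this executor instance:
    // id % shardTotal == shardIndex, mirroring the SQL WHERE clause.
    static List<Long> slice(List<Long> ids, int shardIndex, int shardTotal) {
        return ids.stream()
                .filter(id -> id % shardTotal == shardIndex)
                .collect(Collectors.toList());
    }
}
```

Every id lands in exactly one shard, so no two executors select the same record.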

To guarantee idempotent execution, the article uses an optimistic‑lock update on the task status:

UPDATE media_process m
SET m.status = '2'
WHERE (m.status = '1' OR m.status = '3')
  AND m.fail_count < 3
  AND m.id = #{id};
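The effect of that UPDATE can be sketched with an in-memory compare-and-set (a stand-in for checking that the affected-row count equals one; the class and field names are illustrative): even if several executors see the same task, only one claim succeeds.

```java
import java.util.concurrent.ConcurrentHashMap;

public class TaskClaim {
    // In-memory stand-in for the media_process status column.
    static final ConcurrentHashMap<Long, String> status = new ConcurrentHashMap<>();

    // Atomically flip status '1' (new) or '3' (retryable) to '2' (processing),
    // like the optimistic-lock UPDATE: replace() succeeds for exactly one caller.
    static boolean claim(long id) {
        String s = status.get(id);
        if (!"1".equals(s) && !"3".equals(s)) return false;
        return status.replace(id, s, "2");
    }
}
```

In the real job, the executor proceeds only when the UPDATE reports one affected row; otherwise another node already owns the task.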

The job then follows these steps: download the source video from MinIO, transcode it to MP4, upload the result back to MinIO, and finally update the task status to success. A CountDownLatch ensures the method blocks until all parallel subtasks finish (or timeout after 30 minutes).
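The await step can be sketched as follows (a hypothetical helper; the real job would submit the download/transcode/upload work per chunk, and would use a 30-minute timeout rather than the short one shown here):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TranscodeBarrier {
    // Submits `chunks` parallel subtasks and blocks until all have counted
    // down or the timeout elapses; returns false on timeout.
    static boolean runAll(int chunks, long timeout, TimeUnit unit) {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        CountDownLatch latch = new CountDownLatch(chunks);
        for (int i = 0; i < chunks; i++) {
            pool.submit(() -> {
                try {
                    // download, transcode, and upload one chunk here (omitted)
                } finally {
                    latch.countDown(); // always count down, even if the chunk fails
                }
            });
        }
        boolean finished;
        try {
            finished = latch.await(timeout, unit);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            finished = false;
        }
        pool.shutdown();
        return finished;
    }
}
```

Counting down in a finally block matters: a chunk that throws must still release the latch, or the job blocks until the timeout.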

Additional scheduled jobs clean up completed records, move them to a history table, and implement a compensation mechanism for tasks that remain in the “processing” state for too long or exceed the maximum retry count.
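The compensation step might look roughly like the following statement; the article omits the exact SQL, and the timestamp column name here is an assumption:

```sql
-- Hypothetical compensation: release tasks stuck in "processing" (status '2')
-- for over 30 minutes so the next schedule can claim them again.
UPDATE media_process m
SET m.status = '3'
WHERE m.status = '2'
  AND m.create_date < NOW() - INTERVAL 30 MINUTE;
```

Tasks whose fail_count has reached the maximum are instead moved aside (for example to the history table) so they stop being rescheduled.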

Testing involves launching three media‑processing nodes, triggering the jobs, and inspecting logs via the XXL‑JOB UI. The logs show detailed execution flow, including download failures, transcoding results, and status updates.

The article concludes that the presented solution successfully demonstrates how to deploy XXL‑JOB in Docker, integrate it with Spring Boot, and apply sharding, concurrency, and idempotency techniques to build a robust distributed video‑transcoding pipeline.

Tags: Java · Docker · distributed scheduling · Spring Boot · xxl-job · video transcoding
Written by Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
