
Using XXL-JOB for Distributed Task Scheduling in Spring Boot: Installation, Configuration, and Real-World Video Transcoding Case

This article introduces the open‑source XXL‑JOB distributed task scheduling platform, explains how to install it via Docker, configure it in a Spring Boot application, and demonstrates a real‑world video‑transcoding use case with sharding, idempotency, and task‑status management.


Introduction

The article records a practical experience with XXL‑JOB, a lightweight, extensible distributed task scheduling platform. It explains what XXL‑JOB is, why a distributed scheduler is needed, and presents a complete end‑to‑end example.

What is XXL‑JOB?

XXL‑JOB is a distributed task scheduling platform whose core design goals are rapid development, simplicity, lightweight, and easy extensibility. It separates the scheduling center (which issues scheduling requests) from executors (which run the actual jobs), achieving decoupling between scheduling and business logic.

Why a Distributed Scheduler? In a clustered environment a single‑node scheduler suffers from duplicate execution, task loss on node failure, lack of elastic scaling, and difficulty monitoring execution. A distributed scheduler provides high availability, fault tolerance, and load balancing.

Installation via Docker

docker pull xuxueli/xxl-job-admin:2.3.1
mkdir -p -m 777 /mydata/xxl-job/data/applogs

After creating /mydata/xxl-job/application.properties (omitted for brevity) and preparing the SQL schema, run the container:

docker run -p 8088:8088 \
  -d --name=xxl-job-admin --restart=always \
  -v /mydata/xxl-job/application.properties:/application.properties \
  -v /mydata/xxl-job/data/applogs:/data/applogs \
  -e PARAMS='--spring.config.location=/application.properties' \
  xuxueli/xxl-job-admin:2.3.1

Verify with docker ps and access the UI at http://<host>:8088/xxl-job-admin/ (default credentials: admin / 123456).

Spring Boot Integration

Add the Maven dependency:

<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>2.3.1</version>
</dependency>

Configure application.yml with the admin address, executor name, IP, port, log path, and access token:

xxl:
  job:
    admin:
      addresses: http://192.168.101.25:8088/xxl-job-admin
    executor:
      appname: media-process-service
      address:
      ip:
      port: 9999
      logpath: /data/applogs/xxl-job/jobhandler
      logretentiondays: 30
    accessToken: default_token

Create a configuration class to instantiate XxlJobSpringExecutor:

/**
 * XXL‑JOB configuration class
 */
@Slf4j
@Configuration
public class XxlJobConfig {
    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;
    @Value("${xxl.job.accessToken}")
    private String accessToken;
    @Value("${xxl.job.executor.appname}")
    private String appname;
    @Value("${xxl.job.executor.address}")
    private String address;
    @Value("${xxl.job.executor.ip}")
    private String ip;
    @Value("${xxl.job.executor.port}")
    private int port;
    @Value("${xxl.job.executor.logpath}")
    private String logPath;
    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        log.info(">>>>>>>>>>> xxl-job config init.");
        XxlJobSpringExecutor executor = new XxlJobSpringExecutor();
        executor.setAdminAddresses(adminAddresses);
        executor.setAppname(appname);
        executor.setAddress(address);
        executor.setIp(ip);
        executor.setPort(port);
        executor.setAccessToken(accessToken);
        executor.setLogPath(logPath);
        executor.setLogRetentionDays(logRetentionDays);
        return executor;
    }
}

In the XXL‑JOB admin console, add a new executor with the same AppName and configure the registration mode (automatic registration is recommended).

Defining a Custom Job

Using the Bean‑mode (method‑level) annotation:

@Component
public class TestJob {
    @XxlJob("testHandler")
    public void testHandler() {
        XxlJobHelper.handleSuccess("Test job executed successfully");
    }
}

The annotation can also specify init and destroy methods; logging and result reporting are done via XxlJobHelper.log, XxlJobHelper.handleSuccess, and XxlJobHelper.handleFail.

Sharding and Idempotency

When the executor is deployed in a cluster, choose the sharding broadcast routing strategy. Each executor obtains its shard index and total via:

int shardIndex = XxlJobHelper.getShardIndex();
int shardTotal = XxlJobHelper.getShardTotal();

Tasks are fetched with a modulo query, e.g.:

select * from media_process m
where m.id % #{shardTotal} = #{shardIndex}
  and (m.status = '1' or m.status = '3')
  and m.fail_count < 3
limit #{count}
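To see how the modulo filter divides work across the cluster, here is a small self-contained Java sketch. Plain collections stand in for the media_process table, and the class and method names are illustrative, not part of XXL-JOB:

```java
import java.util.*;
import java.util.stream.*;

public class ShardingDemo {
    // Simulates the WHERE id % shardTotal = shardIndex filter for one executor node
    static List<Long> fetchShard(List<Long> taskIds, int shardIndex, int shardTotal) {
        return taskIds.stream()
                .filter(id -> id % shardTotal == shardIndex)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> taskIds = LongStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());
        int shardTotal = 3; // three executor nodes in the cluster
        Set<Long> seen = new HashSet<>();
        for (int shardIndex = 0; shardIndex < shardTotal; shardIndex++) {
            List<Long> mine = fetchShard(taskIds, shardIndex, shardTotal);
            System.out.println("shard " + shardIndex + " -> " + mine);
            // No two shards may claim the same task
            mine.forEach(id -> { if (!seen.add(id)) throw new IllegalStateException("duplicate " + id); });
        }
        if (seen.size() != taskIds.size()) throw new IllegalStateException("tasks lost");
        System.out.println("all tasks covered exactly once");
    }
}
```

The key property is that the shards are disjoint and jointly exhaustive: every pending task id lands on exactly one node, so no task is processed twice and none is skipped.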

Idempotency is ensured by optimistic‑lock updates on the task status:

update media_process m
set m.status = '2'
where (m.status = '1' or m.status = '3')
  and m.fail_count < 3
  and m.id = #{id}

Additional XXL‑JOB settings such as “ignore” for expired triggers and “discard later” for blocking strategies further guarantee single execution.
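The effect of the optimistic-lock update can be reproduced in plain Java: several threads race to claim the same task, and the atomic transition from '1' (pending, or '3' retryable) to '2' (processing) succeeds exactly once. This is a sketch with an in-memory map standing in for the table; the names are illustrative:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class StartTaskDemo {
    // taskId -> status ('1' pending, '2' processing, '3' retryable); stands in for media_process
    static final ConcurrentHashMap<Long, String> STATUS = new ConcurrentHashMap<>();

    // Mirrors: update media_process set status='2' where (status='1' or status='3') and id=#{id}
    static boolean startTask(long id) {
        return STATUS.replace(id, "1", "2") || STATUS.replace(id, "3", "2");
    }

    public static void main(String[] args) throws Exception {
        STATUS.put(1L, "1");
        ExecutorService pool = Executors.newFixedThreadPool(8);
        CountDownLatch done = new CountDownLatch(8);
        AtomicInteger wins = new AtomicInteger();
        for (int i = 0; i < 8; i++) {
            pool.execute(() -> {
                if (startTask(1L)) wins.incrementAndGet(); // only one thread can win the lock
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        System.out.println("winners=" + wins.get()); // exactly one
    }
}
```

In the real system the atomicity comes from the database's row-level update (affected rows = 1 means the lock was won); ConcurrentHashMap.replace(key, oldValue, newValue) plays that role here.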

Real‑World Video‑Transcoding Use Case

The project processes videos stored in MinIO. A sharding‑broadcast job fetches a slice of pending records, downloads each video, transcodes it to MP4, uploads the result back to MinIO, and updates the task status. The core logic uses a CountDownLatch to wait for all parallel subtasks (max 30 minutes).

/**
 * Video transcoding handler (sharding broadcast)
 */
@XxlJob("videoTranscodingHandler")
public void videoTranscodingHandler() throws InterruptedException {
    // Shard parameters assigned by the scheduling center
    int shardIndex = XxlJobHelper.getShardIndex();
    int shardTotal = XxlJobHelper.getShardTotal();
    // Claim at most one task per CPU core in this run
    int count = Runtime.getRuntime().availableProcessors();
    ExecutorService executor = Executors.newFixedThreadPool(count);
    List<MediaProcess> list = mediaProcessService.getMediaProcessList(shardIndex, shardTotal, count);
    CountDownLatch latch = new CountDownLatch(list.size());
    list.forEach(media -> {
        executor.execute(() -> {
            try {
                // Optimistic-lock claim: only one node may move the task to "processing"
                boolean locked = mediaProcessService.startTask(media.getId());
                if (!locked) {
                    XxlJobHelper.log("Task lock failed, id {}", media.getId());
                    return;
                }
                File file = mediaFileService.downloadFileFromMinIO(media.getBucket(), media.getObjectName());
                if (file == null) {
                    XxlJobHelper.log("Download failed, id {}", media.getId());
                    mediaProcessService.saveProcessFinishStatus(media.getId(), FAIL, null, null, "Download error");
                    return;
                }
                // Target file the transcoding utility writes to (path construction illustrative)
                File mp4File = new File(file.getAbsolutePath() + ".mp4");
                String result = videoUtil.generateMp4();
                if (!"success".equals(result)) {
                    XxlJobHelper.log("Transcode failed, id {}", media.getId());
                    mediaProcessService.saveProcessFinishStatus(media.getId(), FAIL, null, null, "Transcode error");
                    return;
                }
                boolean uploaded = mediaFileService.addMediaFilesToMinIO(mp4File.getAbsolutePath(), "video/mp4", media.getBucket(), media.getObjectNameMp4());
                if (!uploaded) {
                    XxlJobHelper.log("Upload failed, id {}", media.getId());
                    mediaProcessService.saveProcessFinishStatus(media.getId(), FAIL, null, null, "Upload error");
                    return;
                }
                // Record the MP4 object name and its access URL (URL construction illustrative)
                String fileId = media.getObjectNameMp4();
                String url = "/" + media.getBucket() + "/" + fileId;
                mediaProcessService.saveProcessFinishStatus(media.getId(), SUCCESS, fileId, url, "Done");
            } finally {
                latch.countDown();
            }
        });
    });
    // Wait for all parallel subtasks, at most 30 minutes
    latch.await(30, TimeUnit.MINUTES);
    executor.shutdown();
}

Additional scheduled jobs clean up completed records, move them to a history table, and implement a compensation mechanism for tasks that remain in the “processing” state for too long or exceed three failure attempts.
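A compensation pass of the kind described can be sketched in plain Java. An in-memory record stands in for the database row; the 30-minute threshold, the field names, and the terminal status '4' are illustrative assumptions (the article only specifies statuses '1'/'3' as eligible, '2' as processing, and a three-failure limit):

```java
import java.time.*;
import java.util.*;

public class CompensationDemo {
    static class Task {
        long id; String status; Instant startedAt; int failCount;
        Task(long id, String status, Instant startedAt, int failCount) {
            this.id = id; this.status = status; this.startedAt = startedAt; this.failCount = failCount;
        }
    }

    // Resets tasks stuck in '2' (processing) longer than the timeout back to '3' (retryable);
    // tasks that have exhausted their three attempts are marked '4' (permanently failed)
    static void compensate(List<Task> tasks, Instant now, Duration timeout) {
        for (Task t : tasks) {
            if ("2".equals(t.status) && Duration.between(t.startedAt, now).compareTo(timeout) > 0) {
                t.failCount++;
                t.status = t.failCount >= 3 ? "4" : "3";
            }
        }
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        List<Task> tasks = Arrays.asList(
                new Task(1, "2", now.minus(Duration.ofHours(1)), 0),   // stuck -> retryable
                new Task(2, "2", now.minus(Duration.ofMinutes(5)), 0), // still within budget
                new Task(3, "2", now.minus(Duration.ofHours(1)), 2)    // third strike -> failed
        );
        compensate(tasks, now, Duration.ofMinutes(30));
        tasks.forEach(t -> System.out.println(t.id + " -> " + t.status));
    }
}
```

In production this logic would be a scheduled UPDATE against the media_process table, so that a node crashing mid-transcode cannot strand its claimed tasks in the processing state forever.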

Testing and Observability

After deploying three media‑processing nodes, the author shows UI screenshots of task logs, demonstrates failure scenarios (e.g., wrong download path), and verifies database state changes, confirming that the distributed scheduling works as intended.

References

Official XXL‑JOB repository (GitHub, Gitee), documentation site, and an article on API idempotency are cited.

Tags: Docker, distributed scheduling, Sharding, Spring Boot, Idempotency, xxl-job, video transcoding
Written by

Selected Java Interview Questions

A professional Java tech channel sharing common knowledge to help developers fill gaps. Follow us!
