PowerJob: A Next‑Generation Distributed Task Scheduling and Computing Framework – Introduction and Quick‑Start Guide
PowerJob is a new‑generation distributed job scheduler that adds workflow orchestration, MapReduce‑style computation and rich execution modes to traditional CRON‑based scheduling. This guide explains its advantages, core features and architecture, then walks through getting started with step‑by‑step instructions and code samples.
PowerJob is a new‑generation distributed task scheduling and computing framework that supports CRON, API, fixed‑frequency and fixed‑delay strategies, and provides a visual workflow engine for defining task dependencies, enabling easy job scheduling and complex distributed computation.
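The difference between the fixed‑frequency and fixed‑delay strategies is easy to miss: fixed frequency counts the period from the previous run's start, while fixed delay counts from the previous run's end. A minimal illustration in plain Java (this sketch only models the semantics; it is not PowerJob's API):

```java
import java.util.ArrayList;
import java.util.List;

public class TimingDemo {

    // Fixed frequency: each run starts `period` after the previous run STARTED.
    static List<Long> fixedFrequency(long firstStart, long period, int runs) {
        List<Long> starts = new ArrayList<>();
        long t = firstStart;
        for (int i = 0; i < runs; i++) {
            starts.add(t);
            t += period;
        }
        return starts;
    }

    // Fixed delay: each run starts `delay` after the previous run FINISHED.
    static List<Long> fixedDelay(long firstStart, long delay, long taskDuration, int runs) {
        List<Long> starts = new ArrayList<>();
        long t = firstStart;
        for (int i = 0; i < runs; i++) {
            starts.add(t);
            t = t + taskDuration + delay; // wait for the run to finish, then delay
        }
        return starts;
    }

    public static void main(String[] args) {
        // A task that takes 3 time units, scheduled every 10 units:
        System.out.println(fixedFrequency(0, 10, 3));  // [0, 10, 20]
        System.out.println(fixedDelay(0, 10, 3, 3));   // [0, 13, 26]
    }
}
```

With a slow task, fixed frequency can pile runs up back to back, while fixed delay always leaves breathing room between them.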
Why Choose PowerJob?
Popular schedulers such as Quartz, Elastic‑Job and XXL‑Job each have notable drawbacks. Quartz lacks a web console, and a given job runs on only a single node, so it cannot leverage cluster resources. XXL‑Job improves on Quartz but still supports only MySQL as its backing store, uses static sharding, and offers no workflow capabilities.
PowerJob addresses these issues by offering multi‑node execution, support for multiple relational databases, dynamic sharding, and a DAG‑based workflow system.
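Static sharding fixes the shard count in configuration; dynamic sharding recomputes it from the number of live workers, so capacity changes are picked up automatically. A minimal sketch of the even‑split idea behind sharding (illustrative only, not PowerJob's internal algorithm):

```java
import java.util.ArrayList;
import java.util.List;

public class ShardDemo {

    // Split item indexes [0, total) as evenly as possible across `workers` nodes.
    // Each worker gets a half-open range {start, end}; the first `total % workers`
    // workers get one extra item.
    static List<int[]> shard(int total, int workers) {
        List<int[]> ranges = new ArrayList<>();
        int base = total / workers;
        int rem = total % workers;
        int start = 0;
        for (int w = 0; w < workers; w++) {
            int size = base + (w < rem ? 1 : 0);
            ranges.add(new int[]{start, start + size});
            start += size;
        }
        return ranges;
    }
}
```

Under dynamic sharding, `workers` is the current live worker count rather than a fixed configuration value, so adding or removing nodes immediately changes how the work is split.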
Main Features
Simple usage with a front‑end web console for task management, monitoring and log viewing.
Comprehensive timing strategies: CRON, fixed frequency, fixed delay and API.
Rich execution modes: single‑node, broadcast, Map and MapReduce, allowing a few lines of code to harness cluster‑wide computation.
DAG workflow support for visual task orchestration and data passing between upstream and downstream tasks.
Broad executor support: Spring Bean, built‑in/externally provided Java classes, Shell, Python, etc.
Operational convenience with real‑time log streaming and minimal dependencies (only relational databases such as MySQL, PostgreSQL, Oracle, SQL Server).
High availability and performance through lock‑free scheduling and horizontal scaling of multiple server instances.
Fault tolerance with configurable retry policies.
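Under the hood, any DAG workflow engine has to answer one question: which nodes are ready to run? A node becomes runnable once all of its upstream nodes have finished, which is exactly a topological ordering of the graph. A minimal sketch using Kahn's algorithm (illustrative only, not PowerJob's implementation):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DagDemo {

    // Kahn's algorithm: a node may run once all of its upstream nodes finished.
    // `downstream` maps each node to the nodes that depend on it.
    static List<String> topoOrder(Map<String, List<String>> downstream) {
        Map<String, Integer> indegree = new HashMap<>();
        downstream.keySet().forEach(n -> indegree.putIfAbsent(n, 0));
        downstream.values().forEach(ds ->
                ds.forEach(d -> indegree.merge(d, 1, Integer::sum)));

        // Nodes with no unfinished upstream dependencies are ready now.
        Deque<String> ready = new ArrayDeque<>();
        indegree.forEach((n, d) -> { if (d == 0) ready.add(n); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String n = ready.poll();
            order.add(n); // "run" the node
            for (String d : downstream.getOrDefault(n, List.of())) {
                if (indegree.merge(d, -1, Integer::sum) == 0) {
                    ready.add(d); // last upstream dependency just finished
                }
            }
        }
        return order;
    }
}
```

Data passing between upstream and downstream tasks then simply rides along this ordering: a node's result is available to every node scheduled after it in the DAG.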
Quick Start
1. Clone the project:
```shell
git clone https://github.com/KFCFans/PowerJob.git
```

2. Import the source into an IDE. Start the scheduling server (powerjob-server) and edit the sample project (powerjob-worker-samples) to implement your own processor.
3. Create the database powerjob-daily and modify the configuration file to set the JDBC URL, username, password (or MongoDB URI if preferred):
```properties
spring.datasource.core.jdbc-url=jdbc:mysql://remotehost:3306/powerjob-daily?useUnicode=true&characterEncoding=UTF-8
spring.datasource.core.username=root
spring.datasource.core.password=No1Bug2Please3!
```

4. Launch the server by running the main class:

```
com.github.kfcfans.powerjob.server.OhMyApplication
```

After a successful start, access http://127.0.0.1:7700/ to see the web UI.
5. Register an application in the UI (e.g., oms-test) and note the password for console access.
6. In the sample worker project, modify the configuration class to use the registered app name:
```java
@Configuration
public class OhMySchedulerConfig {

    @Bean
    public OhMyWorker initOMS() throws Exception {
        // Address(es) of the scheduling server(s)
        List<String> serverAddress = Lists.newArrayList("127.0.0.1:7700");

        OhMyConfig config = new OhMyConfig();
        config.setPort(27777);                         // port the worker listens on
        config.setAppName("oms-test");                 // app name registered in the console
        config.setServerAddress(serverAddress);
        config.setStoreStrategy(StoreStrategy.MEMORY); // keep task state in memory

        OhMyWorker ohMyWorker = new OhMyWorker();
        ohMyWorker.setConfig(config);
        return ohMyWorker;
    }
}
```

7. Implement a simple processor by implementing BasicProcessor:
```java
@Slf4j
@Component
public class StandaloneProcessorDemo implements BasicProcessor {

    @Override
    public ProcessResult process(TaskContext context) throws Exception {
        // Online logging: these messages are streamed back to the web console
        OmsLogger omsLogger = context.getOmsLogger();
        omsLogger.info("StandaloneProcessorDemo start process, context is {}.", context);

        System.out.println("jobParams is " + context.getJobParams());
        return new ProcessResult(true, "process successfully~");
    }
}
```

8. Run the sample application:
```
com.github.kfcfans.powerjob.samples.SampleApplication
```

9. Return to the web console to create a new task, configure its parameters, and either wait for the schedule or click "Run" to execute immediately. Monitor the task status and logs via the UI.
The tutorial ends here; further advanced features such as complex workflows, MapReduce, and containerized execution are documented in the official documentation.