Design and Implementation of a Task Orchestration Framework for Business Systems
This article presents a lightweight Java task orchestration framework used in an advertising business system. It covers the background, the core concepts of the orchestration model, the overall architecture, and a basic implementation that manages task dependencies, parallel execution, and error handling, helping developers focus on business logic while improving development efficiency and code maintainability.
1. Background
The advertising engine receives a user request that passes through multiple verification and data processing steps (anti‑fraud, request coloring, tokenization, IP segmentation, user profiling, etc.). The first step is a prerequisite, while the remaining four steps can run in parallel and have no inter‑dependencies. After all steps complete, their results are aggregated.
In more complex scenarios, such as aggregating data for a product detail page, multiple independent asynchronous tasks must be executed and their results combined. The simplified Java example below runs these tasks concurrently using a custom thread pool and blocking Future.get() calls (alternatives such as CompletableFuture and CountDownLatch work similarly).
// Parallel processing tasks for Product (e.g., TaskA list)
@Resource
List<ProductTask> productTasks;
// Tasks that depend on Item (e.g., TaskB list)
@Resource
List<ItemTask> itemTasks;

public void testFuture(HttpServletRequest httpServletRequest) {
    DataContext dataContext = new DataContext();
    dataContext.setHttpServletRequest(httpServletRequest);
    // First parallel phase
    List<Future<?>> product = new ArrayList<>();
    for (Task asyncTask : productTasks) {
        Future<?> submit = threadPoolExecutor.submit(() -> {
            asyncTask.task(dataContext, null);
        });
        product.add(submit);
    }
    for (Future<?> future : product) {
        try {
            future.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
    // Second parallel phase
    List<Future<?>> item = new ArrayList<>();
    for (Task asyncTask : itemTasks) {
        Future<?> submit = threadPoolExecutor.submit(() -> {
            asyncTask.task(dataContext, null);
        });
        item.add(submit);
    }
    for (Future<?> future : item) {
        try {
            future.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
    // After both phases finish, aggregate results
}

Problems Identified
When the number of parallel branches grows (e.g., three or four concurrent flows) or when tasks have hierarchical dependencies, the traditional multithreaded code becomes bulky and hard to maintain.
Separate code blocks for each branch increase duplication and obscure the overall workflow, making debugging and future adjustments difficult.
To address these issues, the article proposes a dedicated orchestration layer that abstracts dependency management, execution order, and error handling.
2. Introduction to Task Orchestration
Task orchestration is a technique that arranges multiple atomic tasks according to their dependencies and execution order, forming a workflow. It simplifies complex business scenarios where tasks must be executed sequentially, in parallel, or a mixture of both, and provides a clear API for describing the workflow.
Compared with raw multithreading, orchestration offers easier scheduling, clearer dependency representation, and better maintainability.
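For contrast, the diamond-shaped flow from the background section (one prerequisite step, independent parallel branches, then aggregation) can be expressed directly with CompletableFuture composition. This is an illustrative sketch, not part of the framework; the class and step names are made up:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DiamondFlow {
    public static String run(ExecutorService pool) throws Exception {
        // Step 1: prerequisite (stands in for the anti-fraud check)
        CompletableFuture<String> antiFraud =
                CompletableFuture.supplyAsync(() -> "ok", pool);

        // Steps 2..n: independent branches that all wait on the prerequisite
        CompletableFuture<String> coloring = antiFraud.thenApplyAsync(r -> r + ":colored", pool);
        CompletableFuture<String> token    = antiFraud.thenApplyAsync(r -> r + ":token", pool);

        // Final step: aggregate once every branch has finished
        return coloring.thenCombine(token, (a, b) -> a + "|" + b).get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            System.out.println(run(pool)); // prints "ok:colored|ok:token"
        } finally {
            pool.shutdown();
        }
    }
}
```

Even this reads better than the raw Future loops above, but as branches multiply and dependencies deepen, the chaining itself becomes the tangle that a dedicated orchestration layer is meant to remove.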
3. Core Design of the Framework
The framework consists of five main components: Task Scheduler (entry point), Workflow Parser (stores dependency graph), Workflow Processor (executes nodes), Task Bus (context carrier), and Business Task Nodes (actual business logic).
3.1 Overall Architecture
The diagram shows how the scheduler triggers the parser, which builds a DAG of SchedulingNode objects. The processor traverses the DAG, submits each node to a thread pool, and monitors state transitions. Results are collected in a concurrent map.
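The code in the following sections references ResultState and WorkResult without showing their definitions. The sketch below is a hypothetical reconstruction of what such supporting types might look like; the state names and methods are assumptions, not the framework's actual code:

```java
import java.util.concurrent.ConcurrentHashMap;

// Assumed lifecycle states of a node (inferred from usage, not the real definition).
enum ResultState { DEFAULT, WORKING, SUCCESS, EXCEPTION, TIMEOUT }

// Carrier for one node's outcome, keyed by task name in the shared result map.
class WorkResult<V> {
    private final String taskName;
    private volatile ResultState state;
    private volatile V result;

    private WorkResult(String taskName, ResultState state) {
        this.taskName = taskName;
        this.state = state;
    }

    // Every node starts with a placeholder result in DEFAULT state.
    static <V> WorkResult<V> defaultResult(String taskName) {
        return new WorkResult<>(taskName, ResultState.DEFAULT);
    }

    void complete(V value) { this.result = value; this.state = ResultState.SUCCESS; }
    ResultState getState() { return state; }
    V getResult()          { return result; }
    String getTaskName()   { return taskName; }
}

public class ResultDemo {
    public static void main(String[] args) {
        // The processor stores results keyed by task name in a concurrent map.
        ConcurrentHashMap<String, WorkResult<?>> results = new ConcurrentHashMap<>();
        WorkResult<String> r = WorkResult.defaultResult("node1");
        r.complete("done");
        results.put(r.getTaskName(), r);
        System.out.println(results.get("node1").getState()); // prints SUCCESS
    }
}
```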
3.2 Task Scheduler (Entry)
public class TaskScheduling {
private static ExecutorService executorService;
/**
* Starts the orchestration.
* @param taskContext execution context containing parameters
* @param executorService user‑provided thread pool
* @param nodes list of pre‑configured nodes (top nodes can be passed)
* @param timeout overall timeout in milliseconds
* @return map of node names to their results
*/
public static Map<String, WorkResult<?>> start(TaskContext taskContext,
                                               ExecutorService executorService,
                                               List<SchedulingNode> nodes,
                                               long timeout) {
    ConcurrentHashMap<String, WorkResult<?>> resultNode = new ConcurrentHashMap<>();
    TaskScheduling.executorService = executorService;
    CompletableFuture<?>[] count = new CompletableFuture[nodes.size()];
    for (int i = 0; i < nodes.size(); i++) {
        SchedulingNode node = nodes.get(i);
        count[i] = CompletableFuture.runAsync(() -> {
            node.execute(executorService, node, timeout, resultNode, taskContext);
        }, executorService);
    }
    try {
        CompletableFuture.allOf(count).get(timeout, TimeUnit.MILLISECONDS);
    } catch (Exception e) {
        // handle timeout or other exceptions
    }
    return resultNode;
}
}

3.3 Workflow Parser
The parser can store the DAG using adjacency matrices or adjacency lists. The article demonstrates a simple parent‑child list implementation where each SchedulingNode keeps fatherHandler (parents) and sonHandler (children) lists.
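Whatever storage the parser uses, an orchestration graph must be acyclic, so a parser would typically validate that before execution starts. Below is a hypothetical sketch using Kahn's algorithm over child lists; GraphNode is a stand-in for the article's SchedulingNode, and this helper is not part of the framework's API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in node type holding only the child links needed for the cycle check.
class GraphNode {
    final String name;
    final List<GraphNode> children = new ArrayList<>();
    GraphNode(String name) { this.name = name; }
}

public class DagCheck {
    // Kahn's algorithm: repeatedly remove nodes with in-degree 0;
    // if any node is left over, the graph contains a cycle.
    static boolean isAcyclic(List<GraphNode> nodes) {
        Map<GraphNode, Integer> inDegree = new HashMap<>();
        for (GraphNode n : nodes) inDegree.putIfAbsent(n, 0);
        for (GraphNode n : nodes)
            for (GraphNode c : n.children)
                inDegree.merge(c, 1, Integer::sum);

        Deque<GraphNode> ready = new ArrayDeque<>();
        for (GraphNode n : nodes) if (inDegree.get(n) == 0) ready.add(n);

        int visited = 0;
        while (!ready.isEmpty()) {
            GraphNode n = ready.poll();
            visited++;
            for (GraphNode c : n.children)
                if (inDegree.merge(c, -1, Integer::sum) == 0) ready.add(c);
        }
        return visited == nodes.size(); // a leftover node means a cycle
    }
}
```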
protected List<SchedulingNode> fatherHandler = new ArrayList<>();
protected List<SchedulingNode> sonHandler = new ArrayList<>();

public SchedulingNode setSonHandler(List<SchedulingNode> nodes) {
    this.sonHandler = nodes;
    return this;
}

public SchedulingNode setFatherHandler(List<SchedulingNode> nodes) {
    this.fatherHandler = nodes;
    return this;
}

public SchedulingNode setSonHandler(SchedulingNode... nodes) {
    return setSonHandler(Arrays.asList(nodes));
}

public void setFatherHandler(SchedulingNode... nodes) {
    this.fatherHandler = Arrays.asList(nodes);
}

Example of linking nodes:
node1.setSonHandler(node2, node3);
node2.setSonHandler(node5).setFatherHandler(node1);
node3.setSonHandler(node4).setFatherHandler(node1);
node4.setSonHandler(node5).setFatherHandler(node3);
node5.setFatherHandler(node2, node4);

3.4 Workflow Processor
The abstract class SchedulingNode defines the execution lifecycle, state management, and result handling. Subclasses implement the actual business logic in the task method and optional callbacks for success or failure.
public abstract class SchedulingNode<T, V> {
    protected String taskName = this.getClass().getSimpleName();
    ResultState status = ResultState.DEFAULT;
    protected T param;
    private volatile WorkResult<V> workResult = WorkResult.defaultResult(getTaskName());

    public abstract V task(TaskContext ctx, T param);

    public void onSuccess(TaskSupport support) { /* optional */ }

    public void onFail(TaskSupport support) { /* optional */ }

    public void execute(ExecutorService executorService, SchedulingNode fromNode,
                        long remainTime, ConcurrentHashMap<String, WorkResult<?>> allNodes,
                        TaskContext taskContext) {
        // Core orchestration logic: check dependencies, run self, handle exceptions, propagate results
    }
}

The processor checks parent node states, decides whether to run the current node, and aggregates results after all children finish.
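Since the body of execute is elided, the traversal it performs can be illustrated with a simplified, single-threaded sketch: run a node only once all of its parents are done, record its result, then visit its children. This is a hedged reconstruction of the idea, not the framework's actual code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MiniNode {
    final String name;
    final List<MiniNode> parents = new ArrayList<>();
    final List<MiniNode> children = new ArrayList<>();
    boolean done = false;

    MiniNode(String name) { this.name = name; }

    static void link(MiniNode parent, MiniNode child) {
        parent.children.add(child);
        child.parents.add(parent);
    }

    void execute(Map<String, String> results) {
        for (MiniNode p : parents) if (!p.done) return; // dependency check: wait for parents
        if (done) return;                               // don't re-run at a join point
        results.put(name, name + "-result");            // stand-in for the real task() call
        done = true;
        for (MiniNode c : children) c.execute(results); // propagate to successors
    }

    public static void main(String[] args) {
        MiniNode n1 = new MiniNode("node1"), n2 = new MiniNode("node2"),
                 n3 = new MiniNode("node3"), n4 = new MiniNode("node4"),
                 n5 = new MiniNode("node5");
        // Same diamond shape as the node-linking example in the parser section.
        link(n1, n2); link(n1, n3); link(n2, n5); link(n3, n4); link(n4, n5);
        Map<String, String> results = new LinkedHashMap<>();
        n1.execute(results);
        System.out.println(results.keySet()); // prints [node1, node2, node3, node4, node5]
    }
}
```

In the real framework the recursive call would instead submit each ready child to the thread pool, and the state check would need to be thread-safe, but the ordering logic is the same.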
3.5 Business Nodes
Concrete business nodes extend SchedulingNode and implement the real work (e.g., database calls, HTTP requests). Example:
@Component
public class Node1 extends SchedulingNode<String, String> {
    @Resource
    private SampleService sampleService;

    @Override
    public String task(TaskContext taskContext, String param) {
        // business logic such as DB or HTTP operations
        sampleService.findItemById(...);
        return "result";
    }

    @Override
    public void onSuccess(TaskSupport support) { /* success handling */ }

    @Override
    public void onFail(TaskSupport support) { /* failure handling */ }
}

Finally, the orchestration is started by constructing the node graph, creating a TaskContext, and invoking the scheduler:
// Build node relationships (as shown earlier)
TaskContext taskContext = new TaskContext();
Map<String, WorkResult<?>> results =
        TaskScheduling.start(taskContext, executorService, Arrays.asList(node1), 5000L);

4. Summary
The task orchestration framework provides a lightweight solution for coordinating multiple inter‑dependent tasks in backend services. By abstracting dependency management, parallel execution, timeout handling, and result aggregation, it enables developers to concentrate on business logic, improves code maintainability, and enhances system scalability. The article also lists several possible enhancements, such as GUI‑based DAG editing, thread‑pool monitoring, dynamic thread allocation, unified logging, and integration with existing scheduling platforms.
HomeTech tech sharing