
Designing Cross‑Period Dependencies in Data Scheduling Systems

This article explains how data scheduling systems manage task execution, ETL processes, and cross‑period dependencies by linking task versions, data partitions, and time parameters, and introduces the offset‑and‑cnt model to express dynamic dependencies in big‑data pipelines.

DataFunTalk

In the era of big data, the value of data comes from its connections, and task scheduling systems solve the problem of data connection management. A scheduler works like a highway with refueling stations, where data are the vehicles that need periodic refueling.

The core of the workflow is job scheduling management and ETL (Extract, Transform, Load). Scheduling determines which jobs run first and how they run. Open‑source systems such as Azkaban, Airflow and DolphinScheduler implement this using DAGs, but they handle cross‑period task dependencies differently.

To illustrate the challenges, consider a task that writes to a Hive table partition. The partition value (e.g., pt=20210717000000) must match the time the data describe, and keeping the two aligned is the scheduler's responsibility. A simple example of such a task is:

insert overwrite table dwd.tb4 partition(pt='20210717000000')
select xxx from stg.tb1
join stg.tb2
join stg.tb3
...
where stg.tb1.pt='20210717000000'
  and stg.tb2.pt='20210715000000'
  and stg.tb3.pt='20210716000000'

When the business logic changes and a back-fill is required, a static date function (e.g., date_sub(current_date,1)) produces incorrect partitions, because the function is evaluated against the wall clock at back-fill time rather than the period being recomputed. A flexible time-parameter mechanism is therefore needed. Common parameters include -1d (yesterday), 0ws (Monday of this week), -1ms (the first day of last month), and so on.
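To make the idea concrete, here is a minimal sketch of such a parameter resolver. The parameter names mirror the ones above, but the function name, the supported subset, and the 14-digit partition format are assumptions for illustration, not the article's actual implementation:

```python
from datetime import date, timedelta

def resolve(param: str, run_date: date) -> str:
    """Resolve a time parameter (-Nd = N days from run_date, 0ws = Monday
    of the current week, -1ms = first day of last month) into a partition
    value like '20210717000000'. A hypothetical sketch."""
    if param.endswith("d"):                       # e.g. "-1d" -> yesterday
        target = run_date + timedelta(days=int(param[:-1]))
    elif param == "0ws":                          # Monday of the current week
        target = run_date - timedelta(days=run_date.weekday())
    elif param == "-1ms":                         # first day of last month
        first_of_month = run_date.replace(day=1)
        target = (first_of_month - timedelta(days=1)).replace(day=1)
    else:
        raise ValueError(f"unsupported parameter: {param}")
    return target.strftime("%Y%m%d") + "000000"

# Resolving against the task's own run date (not the wall clock)
# is what keeps back-fills correct.
print(resolve("-1d", date(2021, 7, 18)))   # 20210717000000
```

Because the resolver takes the run date as an argument, re-running the 2021-07-18 version a month later still yields the 20210717000000 partition.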

With these parameters, a task can be written as:

insert overwrite table dwd.tb4 partition(pt='${-1d_pt}')
select xxx from stg.tb1
join stg.tb2
join stg.tb3
...
where stg.tb1.pt='${-1d_pt}'
  and stg.tb2.pt='${-3d_pt}'
  and stg.tb3.pt='${-2d_pt}'

The scheduler must map each time parameter to a concrete task version. The concepts of offset and cnt are introduced for this purpose:

offset: how many versions the dependency's reference point sits away from the latest upstream version.

cnt: the number of consecutive versions, counted from that reference point, that the downstream task depends on.

For example, a daily task that writes to yesterday’s partition has an offset of –1 (one version behind) and cnt = 1. The relationship between task, version, data partition, and time parameter can be visualized in a table, showing how different source tables (stg.tb1, stg.tb2, stg.tb3) map to offsets and cnt values based on their periods (hourly, daily, etc.).
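The mapping from (offset, cnt) to concrete upstream versions can be sketched in a few lines. Version indices are simplified to integers here; the weekly example is a hypothetical illustration, not a case from the article:

```python
def upstream_versions(latest: int, offset: int, cnt: int) -> list[int]:
    """Return the cnt consecutive upstream versions a downstream task
    depends on, starting |offset| versions behind the latest upstream
    version (offset is negative for older versions). A minimal sketch
    of the offset-and-cnt model."""
    start = latest + offset           # reference point
    return [start - i for i in range(cnt)]

# Daily task writing yesterday's partition: offset = -1, cnt = 1.
print(upstream_versions(latest=100, offset=-1, cnt=1))   # [99]

# A weekly task aggregating the last 7 daily upstream versions
# (hypothetical) would use offset = -1, cnt = 7.
print(upstream_versions(latest=100, offset=-1, cnt=7))   # [99, 98, 97, 96, 95, 94, 93]
```

The same two numbers express both the simple "depend on yesterday" case and cross-period cases such as a weekly task fanning in over seven daily versions.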

Once the runtime dependencies are established, the scheduler triggers downstream versions after a successful run, and each downstream node checks whether all its upstream versions have completed. This process forms a DAG execution rhythm where tasks are triggered from the source nodes and propagate downstream.
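The readiness check each downstream node performs can be reduced to one predicate. The function and status labels below are assumptions for illustration; real schedulers also handle failures, retries, and timeouts:

```python
def ready(upstream_status: dict[int, str], required_versions: list[int]) -> bool:
    """A downstream version may run only when every required upstream
    version has completed successfully. A minimal sketch of the
    per-node check that drives the DAG execution rhythm."""
    return all(upstream_status.get(v) == "success" for v in required_versions)

status = {99: "success", 98: "running"}
print(ready(status, [99]))       # True  -> trigger the downstream version
print(ready(status, [99, 98]))   # False -> keep waiting
```

After every successful run the scheduler re-evaluates this predicate for the affected downstream nodes, which is how execution propagates from the source nodes through the DAG.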

The proposed design is compared with DolphinScheduler’s cross‑period dependency handling. While DolphinScheduler treats cross‑period dependencies as separate tasks (making the system easier to understand but heavier), the offset‑cnt model is more generic and can handle complex scenarios such as daily tasks depending on weekly or monthly tasks. Its advantages are flexibility and broad applicability; its drawbacks are higher learning cost and the need for manual tuning in edge cases.

Overall, the article demonstrates a practical approach to modeling and managing dynamic, cross‑period dependencies in big‑data scheduling pipelines.

Tags: big data, DAG, ETL, data scheduling, offset, cnt, cross-period dependency
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
