Implementing Periodic Tasks in Python: while‑loop, Timeloop, sched, schedule, APScheduler, Celery, and Airflow
This article reviews several Python approaches for creating scheduled or periodic jobs—including a simple while‑True loop with sleep, the Timeloop library, the built‑in sched module, the schedule package, APScheduler, Celery, and Apache Airflow—explaining their usage, advantages, limitations, and providing ready‑to‑run code samples.
In everyday development we often need tasks that run on a regular schedule. Python offers many ways to achieve this, from low‑level loops to full‑featured workflow engines.
Using while True + time.sleep()
The sleep(secs) function from the time module blocks the current thread for the specified number of seconds, allowing a simple infinite loop to act as a timer.
<code>import datetime</code>
<code>import time</code>
<code>def time_printer():</code>
<code> now = datetime.datetime.now()</code>
<code> ts = now.strftime('%Y-%m-%d %H:%M:%S')</code>
<code> print('do func time :', ts)</code>
<code>def loop_monitor():</code>
<code> while True:</code>
<code> time_printer()</code>
<code> time.sleep(5) # pause 5 seconds</code>
<code>if __name__ == "__main__":</code>
<code> loop_monitor()</code>
Main drawback: it can only specify an interval, not an exact clock time, and the thread is blocked for the whole sleep period.
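The clock-time limitation can be worked around by computing the sleep interval yourself. A minimal sketch (the helper name seconds_until is ours, not from any library):

```python
import datetime
import time

def seconds_until(hour, minute):
    """Seconds from now until the next occurrence of hour:minute."""
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)  # already passed today; aim for tomorrow
    return (target - now).total_seconds()

# Sleep until the next 10:30, then run the task once:
# time.sleep(seconds_until(10, 30))
```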
Timeloop library
Timeloop provides a decorator‑based API to run functions at fixed intervals in a separate thread.
<code>import time</code>
<code>from timeloop import Timeloop</code>
<code>from datetime import timedelta</code>
<code>tl = Timeloop()</code>
<code>@tl.job(interval=timedelta(seconds=2))</code>
<code>def sample_job_every_2s():</code>
<code> print("2s job current time : {}".format(time.ctime()))</code>
<code>@tl.job(interval=timedelta(seconds=5))</code>
<code>def sample_job_every_5s():</code>
<code> print("5s job current time : {}".format(time.ctime()))</code>
<code>@tl.job(interval=timedelta(seconds=10))</code>
<code>def sample_job_every_10s():</code>
<code> print("10s job current time : {}".format(time.ctime()))</code>
<code>if __name__ == "__main__":</code>
<code> tl.start(block=True) # start the scheduler; block=True keeps the main thread alive</code>
Built‑in sched module
The sched.scheduler class takes a time function and a delay function (usually time.time and time.sleep) and schedules events with optional priorities.
<code>import datetime</code>
<code>import time</code>
<code>import sched</code>
<code>def time_printer():</code>
<code> now = datetime.datetime.now()</code>
<code> ts = now.strftime('%Y-%m-%d %H:%M:%S')</code>
<code> print('do func time :', ts)</code>
<code>def loop_monitor():</code>
<code> s = sched.scheduler(time.time, time.sleep)</code>
<code> s.enter(5, 1, time_printer, ())</code>
<code> s.run()</code>
<code>if __name__ == "__main__":</code>
<code> loop_monitor()</code>
Key methods: enter(delay, priority, action, argument), cancel(event), and run(). This avoids an explicit while True loop, but the scheduler still blocks while waiting, and each event fires only once unless the action re-registers itself.
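To make sched periodic, the scheduled action can re-enter itself. A bounded sketch (the tick helper and the three-run limit are ours, for illustration):

```python
import sched
import time

s = sched.scheduler(time.time, time.sleep)
ticks = []

def tick(remaining):
    ticks.append(time.time())
    if remaining > 1:
        # re-enter the event so it fires again: this is what makes sched periodic
        s.enter(0.1, 1, tick, (remaining - 1,))

s.enter(0.1, 1, tick, (3,))
s.run()  # blocks until the event queue is empty (three ticks here)
```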
schedule third‑party package
schedule offers a human‑readable syntax for defining jobs that run every few seconds, minutes, hours, or at specific times.
<code>import schedule</code>
<code>import time</code>
<code>def job():</code>
<code> print("I'm working...")</code>
<code>schedule.every(10).seconds.do(job)</code>
<code>schedule.every().hour.do(job)</code>
<code>schedule.every().day.at("10:30").do(job)</code>
<code>while True:</code>
<code> schedule.run_pending()</code>
<code> time.sleep(1)</code>
It also supports tags, parameter passing, job cancellation, and one‑time execution via schedule.CancelJob.
Parallel execution with threads
Combining schedule with the threading module lets each job run in its own thread, preventing a long‑running job from blocking others.
<code>import threading</code>
<code>import time</code>
<code>import schedule</code>
<code>def job1():</code>
<code> print("I'm running on thread %s" % threading.current_thread())</code>
<code>def run_threaded(job_func):</code>
<code> job_thread = threading.Thread(target=job_func)</code>
<code> job_thread.start()</code>
<code>schedule.every(10).seconds.do(run_threaded, job1)</code>
<code>while True:</code>
<code> schedule.run_pending()</code>
<code> time.sleep(1)</code>
APScheduler (Advanced Python Scheduler)
APScheduler implements Quartz‑style scheduling with triggers, job stores, and executors, supporting date‑based, interval‑based, and cron‑style jobs, plus persistence.
<code>from apscheduler.schedulers.blocking import BlockingScheduler</code>
<code>from datetime import datetime</code>
<code>def job():</code>
<code> print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))</code>
<code>sched = BlockingScheduler()</code>
<code>sched.add_job(job, 'interval', seconds=5, id='my_job_id')</code>
<code>sched.start()</code>
Celery
Celery is a distributed task queue that can also schedule periodic jobs via celery beat. It requires a message broker (RabbitMQ, Redis, etc.) and, optionally, a result backend.
Celery Beat reads a schedule and enqueues tasks.
Workers consume tasks from the broker and execute them.
Result backend stores execution results.
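Putting these pieces together, a minimal beat configuration might look like the following sketch (the module name tasks, the Redis broker URL, and the schedule entries are all illustrative assumptions):

```python
from celery import Celery
from celery.schedules import crontab

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def cleanup():
    print('cleaning up')

app.conf.beat_schedule = {
    'cleanup-every-30s': {
        'task': 'tasks.cleanup',
        'schedule': 30.0,  # plain number = interval in seconds
    },
    'cleanup-weekday-morning': {
        'task': 'tasks.cleanup',
        'schedule': crontab(hour=7, minute=30, day_of_week='mon-fri'),
    },
}

# Run beat and a worker in separate processes:
#   celery -A tasks beat
#   celery -A tasks worker
```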
Apache Airflow
Airflow defines workflows as Directed Acyclic Graphs (DAGs) of tasks. It provides many built‑in operators (BashOperator, PythonOperator, EmailOperator, etc.) and supports branching, scheduling, and monitoring.
Typical DAG example: task T1 runs, then T2 and T3 run in parallel, and finally T4 runs after both finish.
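That dependency pattern might be sketched as a DAG file like this (assumes Airflow 2.4+; the dag_id, task ids, and schedule are illustrative, and EmptyOperator stands in for real work):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="example_fan_out_fan_in",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = EmptyOperator(task_id="t1")
    t2 = EmptyOperator(task_id="t2")
    t3 = EmptyOperator(task_id="t3")
    t4 = EmptyOperator(task_id="t4")

    # t1 first, then t2 and t3 in parallel, then t4 after both finish
    t1 >> [t2, t3] >> t4
```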
Airflow can be run in standalone mode or in a distributed setup with a scheduler, workers, and a metadata database.
Overall, the article gives a comparative overview of simple loops, lightweight libraries, full‑featured schedulers, and distributed workflow engines for Python‑based periodic task execution.
Python Programming Learning Circle
A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.