
Implementing a Timing Wheel for RPC Timeout and Heartbeat Tasks

This article explains the problems caused by naive timer implementations in high‑concurrency RPC frameworks and introduces the timing‑wheel mechanism, illustrating its principles, multi‑level design, and practical applications such as request timeout, startup timeout, and heartbeat handling.

Code Ape Tech Column

Hello everyone, I am Chen.

My "Spring Cloud Alibaba Practical Project" video tutorial is finished, covering Alibaba middleware, OAuth2 microservice authentication, gray release, and distributed transactions. You can subscribe to the Spring Cloud Alibaba video series here.

Today’s article introduces how to use a timing wheel for scheduled tasks in RPC, such as client‑side timeout handling and heartbeat checks.

What problems do scheduled tasks bring?

Before discussing the timing wheel, let’s look at scheduled tasks in RPC. For example, when a client sends a request, it creates a Future and stores the request ID. If the server does not respond in time, the client must handle the timeout.
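A minimal sketch of that registration step might look like this (the class and method names here are illustrative, not taken from any specific framework): the client maps the request ID to a `CompletableFuture` before sending, and the response handler completes and removes it.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class PendingRequests {
    // requestId -> the Future the caller is waiting on
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called before the request is written to the channel.
    public CompletableFuture<String> register(long requestId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(requestId, future);
        return future;
    }

    // Called by the response handler when a reply with this ID arrives.
    public void complete(long requestId, String response) {
        CompletableFuture<String> future = pending.remove(requestId);
        if (future != null) {
            future.complete(response);
        }
    }
}
```

The timeout question is then: who fails the Futures that are never completed by a response?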

One simple implementation creates a new thread for each Future and sleeps until the timeout, but this quickly leads to an explosion of threads under high concurrency.
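As a sketch of this anti-pattern (illustrative only, do not use in production), each call spawns a watchdog thread that sleeps for the timeout and then fails the Future if it is still incomplete:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ThreadPerTimeout {
    // One dedicated thread per in-flight request: the thread count grows
    // linearly with concurrency, which is exactly the problem described above.
    public static void watch(CompletableFuture<?> future, long timeoutMs) {
        new Thread(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(timeoutMs);
            } catch (InterruptedException e) {
                return; // watchdog cancelled
            }
            // No effect if the response already completed the Future.
            future.completeExceptionally(new TimeoutException("RPC timed out"));
        }).start();
    }
}
```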

A better approach uses a single thread that scans all pending Futures every 100 ms and triggers timeout logic where deadlines have passed. This caps the thread count, but each pass still traverses every pending Future, even though most of them have not timed out, so CPU time is wasted on repeated full scans under high concurrency.
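A sketch of that scanner (names again illustrative): a single pass walks the whole pending map, firing and removing any entry whose deadline has passed. In the real loop a daemon thread would call this every 100 ms.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TimeoutScanner {
    static final class Pending {
        final long deadlineMs;
        final Runnable onTimeout;
        Pending(long deadlineMs, Runnable onTimeout) {
            this.deadlineMs = deadlineMs;
            this.onTimeout = onTimeout;
        }
    }

    private final Map<Long, Pending> pending = new ConcurrentHashMap<>();

    public void add(long requestId, long deadlineMs, Runnable onTimeout) {
        pending.put(requestId, new Pending(deadlineMs, onTimeout));
    }

    // One pass over ALL entries -- O(n) even when nothing has timed out.
    public int scanOnce(long nowMs) {
        int fired = 0;
        Iterator<Map.Entry<Long, Pending>> it = pending.entrySet().iterator();
        while (it.hasNext()) {
            Pending p = it.next().getValue();
            if (nowMs >= p.deadlineMs) {
                it.remove();
                p.onTimeout.run();
                fired++;
            }
        }
        return fired;
    }
}
```

The waste is visible in `scanOnce`: entries far from their deadline are still examined on every pass, which is the cost the timing wheel eliminates.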

What is a timing wheel?

The timing wheel reduces unnecessary scans by aligning task execution with time slots, similar to how a clock’s second, minute, and hour hands move.

In a timing wheel, each slot represents a time slice (like a clock tick). Tasks are placed into the slot corresponding to their execution time, and the wheel advances at fixed intervals, processing tasks in the current slot.
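The slot placement above can be sketched as a minimal single-level wheel (a simplified sketch; production implementations such as Netty's HashedWheelTimer add a dedicated tick thread and finer-grained concurrency control). A task is placed at `(currentTick + delay/tickMs) % wheelSize`, and tasks whose delay exceeds one full rotation carry a "rounds" counter and are skipped until it reaches zero:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

public class SimpleTimingWheel {
    static final class Task {
        long rounds;        // full rotations to wait before this task fires
        final Runnable job;
        Task(long rounds, Runnable job) { this.rounds = rounds; this.job = job; }
    }

    private final long tickMs;
    private final List<Deque<Task>> slots;
    private long currentTick = 0;

    public SimpleTimingWheel(long tickMs, int wheelSize) {
        this.tickMs = tickMs;
        this.slots = new ArrayList<>(wheelSize);
        for (int i = 0; i < wheelSize; i++) slots.add(new ArrayDeque<>());
    }

    public void schedule(long delayMs, Runnable job) {
        long ticks = delayMs / tickMs;
        int slot = (int) ((currentTick + ticks) % slots.size());
        slots.get(slot).add(new Task(ticks / slots.size(), job));
    }

    // Advance one slot; a real wheel's timer thread calls this every tickMs.
    public void tick() {
        Deque<Task> bucket = slots.get((int) (currentTick % slots.size()));
        Iterator<Task> it = bucket.iterator();
        while (it.hasNext()) {
            Task t = it.next();
            if (t.rounds == 0) { it.remove(); t.job.run(); }
            else t.rounds--;   // not due yet: wait another full rotation
        }
        currentTick++;
    }
}
```

Each tick only touches one bucket, so the per-tick cost is proportional to the tasks due around that slot, not to all pending tasks.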

Timing wheels can have multiple layers; each higher layer’s slot duration equals the full cycle of the lower layer, allowing efficient handling of both short‑ and long‑duration timers.

Example scenario: a first-level wheel with 10 slots, each representing 100 ms (full rotation 1 s), and a second-level wheel with 10 slots of 1 s each. Three tasks are added: A (delay 90 ms), B (610 ms), and C (1 s 610 ms). Task A lands in slot 0 and fires on the first tick; at 600 ms the wheel reaches slot 6 and fires task B; task C initially sits in the second-level wheel, and once the first-level wheel completes a full rotation the second-level hand advances, moving C down into first-level slot 6, where it fires at 1 s 610 ms.
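The placements in this scenario follow from simple integer arithmetic, sketched here as two helper functions (hypothetical names, just to check the example's numbers):

```java
public class SlotMath {
    // First-level slot: delay divided by the tick size, wrapped around the wheel.
    public static int firstLevelSlot(long delayMs, long tickMs, int slots) {
        return (int) ((delayMs / tickMs) % slots);
    }

    // Second-level slot: delay divided by one full first-level rotation.
    public static int secondLevelSlot(long delayMs, long tickMs, int slots) {
        return (int) (delayMs / (tickMs * slots));
    }
}
```

With tick = 100 ms and 10 slots per level: A (90 ms) maps to slot 0, B (610 ms) to slot 6, and C (1610 ms) to second-level slot 1, with a 610 ms remainder that lands in first-level slot 6 after the move down.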

Application of the timing wheel in RPC

Any RPC feature that requires scheduling can use a timing wheel, such as client‑side request timeout, client/server startup timeout, and periodic heartbeat messages.

For repeated tasks like heartbeats, the task can be re‑inserted into the wheel after execution.
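The re-insertion pattern can be sketched like this. A JDK `ScheduledExecutorService` stands in for the wheel here to keep the example self-contained; with an actual wheel timer (e.g. Netty's `HashedWheelTimer`) the rescheduling call inside the task would be `timer.newTimeout(...)` instead. All class and method names are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Heartbeat {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger sent = new AtomicInteger();
    private volatile boolean running = true;

    public void start(long intervalMs) {
        timer.schedule(() -> beat(intervalMs), intervalMs, TimeUnit.MILLISECONDS);
    }

    private void beat(long intervalMs) {
        if (!running) return;
        sent.incrementAndGet();   // stand-in for sending a PING frame
        // Re-insert: the task schedules its own next run after executing.
        timer.schedule(() -> beat(intervalMs), intervalMs, TimeUnit.MILLISECONDS);
    }

    public int sentCount() { return sent.get(); }

    public void stop() {
        running = false;
        timer.shutdown();
    }
}
```

One-shot timers plus self-rescheduling keeps the wheel simple: it never needs to know which tasks are periodic.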

Summary

The timing wheel efficiently handles scheduled tasks in RPC frameworks, avoiding the overhead of creating a thread per task and reducing CPU waste caused by frequent full‑scan loops.

Key points to consider when configuring a timing wheel:

The shorter the slot interval, the more precise the timer, but also the higher the potential overhead.

The more slots a wheel has, the fewer tasks share each slot, so fewer not-yet-due tasks are examined and skipped on each tick; multi-level wheels help balance precision against slot count and memory.

By adjusting the wheel’s period and slot count to match specific business scenarios, you can achieve efficient timeout handling, startup checks, and heartbeat mechanisms in RPC systems.

Tags: backend, distributed systems, RPC, timeout, heartbeat, timing wheel
Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
