
Implementation Details of Scheduler and Context in libunifex (CPU Thread Execution Context)

The article explains libunifex’s CPU‑thread scheduler architecture: a lightweight scheduler wraps a manual_event_loop execution context built on a mutex‑protected FIFO task queue, operations bridge receivers to that context, and a range of thread‑bound and platform‑specific scheduler variants build on the same pattern.

Tencent Cloud Developer

This article continues the deep dive into libunifex concepts, focusing on the implementation details of the scheduler, which is the execution foundation. The discussion is limited to CPU‑thread type execution contexts and does not involve heterogeneous execution contexts.

1. Scheduler Overview

The scheduler in libunifex is a lightweight wrapper; the actual asynchronous task execution is performed by the underlying Execution Context. For non‑heterogeneous implementations, the Execution Context usually represents a single CPU thread or a thread pool. The scheduler wraps the Execution Context, and the pipeline only interacts with the scheduler while the scheduler internally contains the Execution Context implementation.

2. Context Implementation (manual_event_loop)

The manual_event_loop provides a simple FIFO task queue as an intrusive singly linked list (each task carries its own next_ pointer), protected by a std::mutex and a std::condition_variable. Tasks are represented by task_base objects.

struct manual_event_loop::task_base {
  using execute_fn = void(task_base*) noexcept;
  explicit task_base(execute_fn* execute) noexcept : execute_(execute) {}
  void execute() noexcept { this->execute_(this); }
  task_base* next_ = nullptr;
  execute_fn* execute_;
};
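The function pointer in task_base is a hand‑rolled form of type erasure: instead of a virtual table, each concrete task installs a static trampoline that downcasts back to its own type. A minimal, self‑contained sketch of that pattern (print_task and run_demo are illustrative names, not part of libunifex):

```cpp
#include <string>
#include <utility>

// Simplified copy of the article's task_base: a plain function pointer
// stands in for a virtual call, keeping the queue node trivially small.
struct task_base {
  using execute_fn = void(task_base*) noexcept;
  explicit task_base(execute_fn* execute) noexcept : execute_(execute) {}
  void execute() noexcept { this->execute_(this); }
  task_base* next_ = nullptr;
  execute_fn* execute_;
};

// A concrete task recovers its own type inside the static trampoline.
struct print_task : task_base {
  explicit print_task(std::string msg)
    : task_base(&print_task::execute_impl), msg_(std::move(msg)) {}
  static void execute_impl(task_base* t) noexcept {
    auto& self = *static_cast<print_task*>(t);
    self.ran_ = true;  // record that the task body ran
  }
  std::string msg_;
  bool ran_ = false;
};

int run_demo() {
  print_task task{"hello"};
  task_base* erased = &task;   // type-erased view of the task
  erased->execute();           // dispatch through the function pointer
  return task.ran_ ? 1 : 0;
}
```

The trade‑off versus virtual functions is that the node stays a trivially small POD‑like object, which matters when tasks are linked into an intrusive queue.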

The context class runs tasks in a loop, waiting on the condition variable when the queue is empty and processing tasks sequentially.

class manual_event_loop::context {
public:
  void run() {
    std::unique_lock lock{mutex_};
    while (true) {
      while (head_ == nullptr) {
        if (stop_) return;
        cv_.wait(lock);
      }
      auto* task = head_;
      head_ = task->next_;
      if (head_ == nullptr) tail_ = nullptr;
      lock.unlock();
      task->execute();
      lock.lock();
    }
  }

  void stop() {
    std::unique_lock lock{mutex_};
    stop_ = true;
    cv_.notify_all();
  }

private:
  // called by operation<Receiver>, a friend in the full implementation
  void enqueue(task_base* task) {
    std::unique_lock lock{mutex_};
    if (head_ == nullptr) head_ = task; else tail_->next_ = task;
    tail_ = task;
    task->next_ = nullptr;
    cv_.notify_one();
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  task_base* head_ = nullptr;
  task_base* tail_ = nullptr;
  bool stop_ = false;
};
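To watch the queue discipline in isolation, the loop can be reduced to a self‑contained sketch (mini_loop and order_demo are hypothetical names; std::function replaces the intrusive task_base purely for brevity). Because run() drops the lock before executing each task, a task may safely call stop(), which lets the demo run entirely on one thread:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

// Std-only reduction of the article's loop: same FIFO linked list,
// same mutex/condition-variable protocol.
class mini_loop {
  struct node {
    std::function<void()> fn;
    node* next = nullptr;
  };

public:
  void enqueue(std::function<void()> fn) {
    auto* t = new node{std::move(fn)};
    std::unique_lock lock{mutex_};
    if (head_ == nullptr) head_ = t; else tail_->next = t;
    tail_ = t;
    cv_.notify_one();
  }

  void stop() {
    std::unique_lock lock{mutex_};
    stop_ = true;
    cv_.notify_all();
  }

  void run() {
    std::unique_lock lock{mutex_};
    while (true) {
      while (head_ == nullptr) {
        if (stop_) return;
        cv_.wait(lock);
      }
      node* t = head_;
      head_ = t->next;
      if (head_ == nullptr) tail_ = nullptr;
      lock.unlock();    // run user code without holding the lock
      t->fn();
      delete t;
      lock.lock();
    }
  }

private:
  std::mutex mutex_;
  std::condition_variable cv_;
  node* head_ = nullptr;
  node* tail_ = nullptr;
  bool stop_ = false;
};

std::vector<int> order_demo() {
  mini_loop loop;
  std::vector<int> order;
  loop.enqueue([&order] { order.push_back(1); });
  loop.enqueue([&order] { order.push_back(2); });
  loop.enqueue([&loop] { loop.stop(); });  // last task ends the loop
  loop.run();  // drains all three tasks in FIFO order, then returns
  return order;
}
```

Releasing the lock around `t->fn()` is the key detail: it is what allows a task (or another thread) to call enqueue() or stop() without deadlocking against the running loop.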

3. Bridging Scheduler and Execution

The operation class inherits from task_base and connects a receiver to the execution framework. Its start() method enqueues the operation into the context’s task queue.

template <typename Receiver>
class operation final : task_base {
public:
  explicit operation(Receiver&& receiver, context* loop)
    : task_base(&operation::execute_impl),
      receiver_(std::forward<Receiver>(receiver)),
      loop_(loop) {}

  void start() noexcept { loop_->enqueue(this); }

private:
  static void execute_impl(task_base* t) noexcept {
    auto& self = *static_cast<operation*>(t);
    unifex::set_value(std::move(self.receiver_));
    // ... other logic omitted
  }

  Receiver receiver_;
  context* const loop_;
};

The scheduler exposes a schedule_task sender; calling connect on it with a receiver creates an operation and returns it to the pipeline.

class schedule_task {
public:
  template <typename Receiver>
  operation<Receiver> connect(Receiver&& receiver) const {
    return operation<Receiver>{std::forward<Receiver>(receiver), loop_};
  }

private:
  friend class scheduler;  // only the scheduler constructs schedule_task
  explicit schedule_task(context* loop) noexcept : loop_(loop) {}

  context* const loop_;
};

The scheduler holds a pointer to the context and creates schedule_task objects.

class scheduler {
public:
  explicit scheduler(context* loop) noexcept : loop_(loop) {}
  schedule_task schedule() const noexcept { return schedule_task{loop_}; }
private:
  context* loop_;
};
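How the three pieces fit together can be sketched end to end. The version below is a deliberate simplification, not libunifex’s API: immediate_context and flag_receiver are hypothetical, and the context runs each task the moment it is enqueued, so the connect/start flow is visible without spawning a thread:

```cpp
#include <type_traits>
#include <utility>

struct task_base {
  using execute_fn = void(task_base*) noexcept;
  explicit task_base(execute_fn* f) noexcept : execute_(f) {}
  void execute() noexcept { execute_(this); }
  execute_fn* execute_;
};

// Hypothetical stand-in for the real event loop: no queue, no thread,
// every task runs inline at enqueue time.
struct immediate_context {
  void enqueue(task_base* t) { t->execute(); }
};

template <typename Receiver>
class operation final : task_base {
public:
  operation(Receiver&& r, immediate_context* loop)
    : task_base(&operation::execute_impl),
      receiver_(std::forward<Receiver>(r)),
      loop_(loop) {}

  void start() noexcept { loop_->enqueue(this); }

private:
  static void execute_impl(task_base* t) noexcept {
    auto& self = *static_cast<operation*>(t);
    self.receiver_.set_value();  // complete the receiver
  }

  Receiver receiver_;
  immediate_context* const loop_;
};

class scheduler {
public:
  explicit scheduler(immediate_context* loop) noexcept : loop_(loop) {}

  template <typename Receiver>
  auto connect(Receiver&& r) const {
    return operation<std::decay_t<Receiver>>{std::forward<Receiver>(r), loop_};
  }

private:
  immediate_context* loop_;
};

// A minimal receiver: flips a flag when the scheduled work completes.
struct flag_receiver {
  bool* done;
  void set_value() { *done = true; }
};

bool demo() {
  immediate_context ctx;
  scheduler sched{&ctx};
  bool done = false;
  auto op = sched.connect(flag_receiver{&done});
  op.start();  // enqueue -> execute -> set_value
  return done;
}
```

The operation state owns the receiver and lives until the context executes it, which is why the pipeline keeps the operation alive after start() rather than discarding it.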

4. Thread‑Bound Contexts

To bind a context to a physical thread, single_thread_context wraps a manual_event_loop and launches a dedicated thread that runs the loop.

class single_thread_context {
  manual_event_loop loop_;
  std::thread thread_;
public:
  single_thread_context() : loop_(), thread_([this]{ loop_.run(); }) {}
  ~single_thread_context() { loop_.stop(); thread_.join(); }
  auto get_scheduler() noexcept { return loop_.get_scheduler(); }
  std::thread::id get_thread_id() const noexcept { return thread_.get_id(); }
};
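The same pattern can be sketched with the standard library alone (all names here are illustrative, and a std::deque of std::function stands in for the intrusive list): the constructor launches the worker thread, the destructor requests stop and joins, and already‑queued work still drains before the thread exits:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <utility>

class single_thread_context {
public:
  single_thread_context() : thread_([this] { run(); }) {}

  ~single_thread_context() {
    {
      std::lock_guard lock{mutex_};
      stop_ = true;
    }
    cv_.notify_all();
    thread_.join();  // queued work drains before the thread exits
  }

  void post(std::function<void()> fn) {
    {
      std::lock_guard lock{mutex_};
      queue_.push_back(std::move(fn));
    }
    cv_.notify_one();
  }

  std::thread::id get_thread_id() const noexcept { return thread_.get_id(); }

private:
  void run() {
    std::unique_lock lock{mutex_};
    while (true) {
      while (queue_.empty()) {
        if (stop_) return;  // stop only once the queue is drained
        cv_.wait(lock);
      }
      auto fn = std::move(queue_.front());
      queue_.pop_front();
      lock.unlock();
      fn();  // user code runs without the lock held
      lock.lock();
    }
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  std::deque<std::function<void()>> queue_;
  bool stop_ = false;
  std::thread thread_;  // declared last so it starts after the members above
};

bool runs_on_dedicated_thread() {
  std::thread::id seen{};
  {
    single_thread_context ctx;
    std::mutex m;
    std::condition_variable done_cv;
    bool done = false;
    ctx.post([&] {
      seen = std::this_thread::get_id();
      std::lock_guard g{m};
      done = true;
      done_cv.notify_one();
    });
    std::unique_lock lk{m};
    done_cv.wait(lk, [&] { return done; });
  }
  return seen != std::this_thread::get_id();
}
```

Note the inner `while (queue_.empty())` check: stop is only honored when the queue is empty, so work posted before destruction is never silently dropped.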

5. Other Scheduler Implementations

libunifex provides additional scheduler/context types, such as inline_scheduler, new_thread_context, static_thread_pool, thread_unsafe_event_loop, timed_single_thread_context, linux::io_uring_context, and Windows‑specific schedulers. These illustrate how the same scheduler interface extends to different platforms and performance requirements.

6. Summary

libunifex’s scheduler implementation is not a one‑size‑fits‑all solution; it serves as a reference for bridging custom thread pools or event loops with the C++ execution framework. Understanding the core concepts—task management, context‑scheduler bridging, and operation state lifetimes—helps developers adapt libunifex to their specific backend concurrency needs.
