
Why Simple FIFO Scheduling Fails and How CFS Guarantees Fair CPU Time

The article walks through the evolution of CPU scheduling from a naïve first‑come‑first‑served queue to priority‑based schemes, explains starvation and time‑slice problems, and shows how tracking each task's actual runtime leads to the Completely Fair Scheduler (CFS) with virtual runtime accounting.


Background

Around 1965, operating systems began moving from single-task to multi-task operation: many programs wanted to run simultaneously, but only one CPU was available to serve them.

First Attempt: FIFO Queue

The simplest idea was to serve tasks in arrival order, just like a bank line:

void schedule() {
    if (head != tail) {
        struct task *t = &queue[head++];
        t->run(); // execute task
    }
}

This works for trivial cases but quickly leads to severe problems: important programs wait too long, user interaction becomes sluggish, and low‑priority tasks may never get CPU time (starvation).
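The cost of strict arrival order is easy to quantify. Here is a minimal sketch (not from the scheduler code above; the millisecond burst times are invented for illustration) showing that every task must wait for the full runtime of everything queued ahead of it:

```c
#include <assert.h>

// Waiting time of the task at position `idx` in a strict FIFO queue:
// the sum of the burst times of everything queued ahead of it.
static int fifo_wait_time(const int *burst_ms, int idx) {
    int wait = 0;
    for (int i = 0; i < idx; i++)
        wait += burst_ms[i];
    return wait;
}
```

With bursts {1000, 2, 2}, the two 2 ms interactive tasks each sit behind the 1000 ms batch job for a full second, which is exactly the sluggishness described above.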

Introducing Priorities

To avoid starvation, each task is assigned a numeric priority (lower number = higher importance). The scheduler always picks the task with the smallest priority value:

void schedule() {
    struct task *highest = NULL;
    for (int i = 0; i < num_tasks; i++) {
        if (highest == NULL || tasks[i].priority < highest->priority) {
            highest = &tasks[i];
        }
    }
    if (highest) {
        highest->run();
    }
}

While this prevents low‑priority tasks from being completely ignored, it creates a new issue: a high‑priority, long‑running job can monopolize the CPU, causing other tasks (e.g., a logging daemon) to starve.
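To see the monopolization concretely, here is a toy simulation of the picker above (the task mix and tick counts are made-up assumptions): a never-blocking priority-0 job wins every round, so a priority-5 logger never runs at all.

```c
#include <assert.h>

struct ptask { int priority; long runs; int runnable; };

// Always pick the runnable task with the smallest priority value.
static struct ptask *pick_highest(struct ptask *tasks, int n) {
    struct ptask *best = 0;
    for (int i = 0; i < n; i++)
        if (tasks[i].runnable &&
            (best == 0 || tasks[i].priority < best->priority))
            best = &tasks[i];
    return best;
}

// Run the picker for `ticks` rounds, counting how often each task runs.
static void simulate_priority(struct ptask *tasks, int n, int ticks) {
    for (int t = 0; t < ticks; t++) {
        struct ptask *next = pick_highest(tasks, n);
        if (next)
            next->runs++;
    }
}
```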

Fixed Time‑Slice Quotas (1990)

To bound CPU usage, each task receives a fixed time‑slice quota. When the quota expires, the task is moved to a waiting queue and must wait for a new slice.

void schedule() {
    struct task *current = get_highest_priority_task();
    if (current->quota == 0) {
        current->quota = compute_new_quota(current->priority);
        move_to_wait_queue(current);           // give others a turn
        current = get_highest_priority_task(); // pick the next runnable task
    }
    current->run();
    current->quota -= 1; // consume 1 ms of the slice
}

This bounds any single task's monopoly on the CPU, but a new problem appears: interactive tasks (e.g., terminal programs) still need to respond quickly, which pushed schedulers toward heuristic priority adjustments.
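Why quotas restore fairness can be sketched with a toy simulation (the 3 ms slice and 1 ms tick are arbitrary assumptions): rotating to the next task whenever a slice is used up lets both tasks make steady progress.

```c
#include <assert.h>

struct qtask { int quota; long runs; };

// Rotate between tasks round-robin style whenever a slice is exhausted.
static void quota_schedule(struct qtask *tasks, int n, int ticks) {
    int cur = 0;
    for (int t = 0; t < ticks; t++) {
        if (tasks[cur].quota == 0) {
            tasks[cur].quota = 3;    // refill a fresh 3 ms slice
            cur = (cur + 1) % n;     // rotate to the next task
        }
        tasks[cur].runs++;           // run for one 1 ms tick
        tasks[cur].quota--;
    }
}
```

After 12 ticks with two tasks, each has received exactly 6 ms of CPU time: no one starves, no one monopolizes.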

Heuristic Interactive Detection (Failed)

Developers added rules such as "if a task spends more time sleeping than running, treat it as interactive and boost its priority". The heuristic sometimes misclassifies I/O‑heavy background jobs as interactive, causing them to starve truly interactive processes.

void detect_interactive_task(struct task *t) {
    // If the task sleeps more than it runs, mark it interactive
    if (t->sleep_time > t->run_time) {
        t->is_interactive = 1;
        t->priority -= 5; // raise priority
    }
}

Because the rules grew increasingly complex and produced inconsistent results, the design was reconsidered.

Rethinking Fairness: Track Runtime

The new insight was to record how long each task has actually run and always schedule the task with the smallest accumulated runtime:

struct task {
    int pid;
    u64 runtime; // total CPU time used
};

void schedule() {
    struct task *next = find_task_with_min_runtime();
    u64 start = now();
    next->run();
    u64 delta = now() - start;
    next->runtime += delta; // update accumulated runtime
}

With this approach, interactive tasks (which spend most of their life sleeping) accumulate very little runtime, so they naturally rise to the front of the queue without any special heuristics.
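A minimal sketch of that pick (using the standard uint64_t; the runtime values are invented): a task that has been asleep wakes up with far less accumulated runtime than a CPU-bound peer, so it is chosen immediately, with no interactivity heuristic involved.

```c
#include <assert.h>
#include <stdint.h>

struct rtask { uint64_t runtime; int runnable; };

// Pick the runnable task with the least accumulated CPU time.
static struct rtask *pick_min_runtime(struct rtask *tasks, int n) {
    struct rtask *min = 0;
    for (int i = 0; i < n; i++)
        if (tasks[i].runnable &&
            (min == 0 || tasks[i].runtime < min->runtime))
            min = &tasks[i];
    return min;
}
```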

Virtual Runtime and CFS

To incorporate task importance, the scheduler multiplies actual runtime by a weight derived from the task's nice value, producing a "virtual runtime" (vruntime). The task with the smallest vruntime is selected next:

struct task {
    int pid;
    int nice;          // -20 (high) to +19 (low)
    u64 vruntime;      // weighted runtime
    int weight;        // derived from nice (1024 at nice 0)
};

void schedule() {
    struct task *next = find_task_with_min_vruntime();
    u64 start = now();
    next->run();
    u64 delta = now() - start;
    next->vruntime += delta * 1024 / next->weight; // virtual accounting
}

This mechanism is the core of the Linux Completely Fair Scheduler (CFS). It provides fairness by giving each task a proportionate share of CPU time while still allowing higher‑priority tasks (larger weight) to run more often.
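The weight at nice 0 is 1024 (the kernel's NICE_0_LOAD), and each nice step scales the weight by roughly 1.25x; the values used in the test below (3906 for nice -5, 335 for nice +5) come from the kernel's sched_prio_to_weight table. A sketch of the update (the kernel's analogue is calc_delta_fair, which uses fixed-point arithmetic; the helper name here is ours) makes the invariant visible: running for a span proportional to your weight advances vruntime by the same amount for every task.

```c
#include <assert.h>
#include <stdint.h>

#define NICE_0_LOAD 1024 // weight of a nice-0 task

// vruntime delta for `delta_exec` time units of real runtime at `weight`,
// mirroring the update in the schedule() sketch above.
static uint64_t calc_delta_vruntime(uint64_t delta_exec, uint64_t weight) {
    return delta_exec * NICE_0_LOAD / weight;
}
```

A nice -5 task (weight 3906) must run 3906 time units to advance its vruntime by 1024, the same advance a nice 0 task gets from only 1024 units, so heavier tasks earn proportionally more CPU before the min-vruntime pick moves on.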

Takeaways

Simple FIFO cannot satisfy real‑world workloads.

Priority alone lets a long-running high-priority task starve everything else.

Fixed time‑slice quotas improve fairness but add complexity.

Tracking actual runtime and using virtual runtime yields a simple, robust, and fair scheduling algorithm (CFS).

Tags: operating systems, CPU scheduling, CFS, priority, time slice, virtual runtime
Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
