
Understanding Linux IRQ and SoftIRQ: From Basics to Deferred Handling

This article explains the fundamentals of Linux interrupt handling, covering the concepts of hardware and software interrupts, the IRQ processing flow, maskable versus non‑maskable interrupts, and the three deferred execution mechanisms—softirq, tasklet, and workqueue—along with code examples and practical considerations.


What is an interrupt?

The CPU time‑multiplexes many tasks, including hardware tasks like disk I/O and keyboard input, and software tasks such as network packet processing. When a hardware device or software subsystem needs immediate attention, it sends an interrupt request (IRQ), causing the CPU to pause its current work and handle the event.

Hard Interrupt

Interrupt handling flow

When an interrupt occurs, the kernel must handle it immediately:

Preempt the current task: the kernel pauses the running process.

Execute the interrupt handler: the appropriate handler for the IRQ is called.

Resume the preempted process after the handler finishes.

Maskable and non‑maskable

Maskable interrupts on x86_64 can be disabled and enabled with the cli and sti instructions:

static inline void native_irq_disable(void) {
    asm volatile("cli" ::: "memory"); // clear IF flag
}
static inline void native_irq_enable(void) {
    asm volatile("sti" ::: "memory"); // set IF flag
}

Maskable interrupts can be temporarily blocked; most IRQs are maskable, e.g., network card packet interrupts. Non‑maskable interrupts (NMIs) cannot be blocked; they signal urgent conditions such as hardware failures or watchdog timeouts.

Execution speed vs. logical complexity

IRQ handlers must run very quickly to avoid losing events, yet they often need to perform complex logic such as packet reception, creating an inherent tension.

Deferred interrupt handling

Traditionally, the solution is to split interrupt handling into two parts:

Top half – the minimal work that must run in hard‑interrupt context.

Bottom half – the remaining work queued for later execution.

This approach is called deferred or postponed handling. The bottom half runs outside the hard‑interrupt context, typically in a kernel thread.

Soft Interrupt (softirq)

Softirq subsystem

The kernel creates one ksoftirqd kernel thread per CPU (ksoftirqd/0, ksoftirqd/1, …) responsible for processing softirq events.

Softirq handlers are registered with open_softirq(softirq_id, handler). For example, the network TX/RX softirqs are registered as:

// net/core/dev.c
open_softirq(NET_TX_SOFTIRQ, net_tx_action);
open_softirq(NET_RX_SOFTIRQ, net_rx_action);

The CPU softirq overhead can be observed with top; the si field shows softirq CPU usage.

// kernel/softirq.c, __do_softirq() (simplified)
unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
...
restart:
while ((softirq_bit = ffs(pending))) {
    ...
    h->action(h); // run the softirq handler
    ...
}
pending = local_softirq_pending();
if (pending) {
    if (time_before(jiffies, end) && !need_resched() && --max_restart)
        goto restart; // bound time and restarts to avoid hogging the CPU
    wakeup_softirqd(); // otherwise hand off to ksoftirqd
}

Softirq execution stack

Both the ksoftirqd thread and the IRQ exit path eventually invoke __do_softirq(). After an IRQ handler finishes, irq_exit() checks for pending softirqs and kicks off softirq processing:

// kernel/softirq.c, irq_exit()
if (!in_interrupt() && local_softirq_pending())
    invoke_softirq();

Softirq processing steps

Register the handler via open_softirq().

Mark a softirq as pending with raise_softirq(); when called outside interrupt context, it also wakes ksoftirqd.

The scheduler runs ksoftirqd, which processes all pending softirqs.

Tasklet

Tasklets are built on top of softirqs and can be created at runtime. They use the HI_SOFTIRQ and TASKLET_SOFTIRQ softirqs.

void __init softirq_init(void) {
    for_each_possible_cpu(cpu) {
        per_cpu(tasklet_vec, cpu).tail = &per_cpu(tasklet_vec, cpu).head;
        per_cpu(tasklet_hi_vec, cpu).tail = &per_cpu(tasklet_hi_vec, cpu).head;
    }
    open_softirq(TASKLET_SOFTIRQ, tasklet_action);
    open_softirq(HI_SOFTIRQ, tasklet_hi_action);
}

Tasklet structures are defined as:

struct tasklet_struct {
    struct tasklet_struct *next;
    unsigned long state;
    atomic_t count;
    void (*func)(unsigned long);
    unsigned long data;
};

Tasklet execution (tasklet_action()) detaches the per‑CPU list, then walks it and runs each enabled tasklet's function; tasklets disabled via tasklet_disable() (count > 0) are re‑queued instead of run.

Workqueue

Workqueues provide an asynchronous execution context similar to tasklets but run in process context, allowing blocking operations.

Work items are queued and processed by kernel worker threads (kworker), e.g.:

$ systemd-cgls -k | grep kworker
├─ 5 [kworker/0:0H]
├─ 15 [kworker/1:0H]
├─ 20 [kworker/2:0H]
├─ 25 [kworker/3:0H]

Key differences:

Tasklets run in softirq context on a specific CPU.

Workqueues run in process context and can be scheduled on any CPU.

Workqueues allow creating dedicated worker threads, unlike softirqs.

Reference

Linux Inside (online book), “Interrupts and Interrupt Handling”

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Kernel, Linux, softirq, Deferred Execution, interrupt handling, irq
Written by

Open Source Linux

Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.
