Understanding Linux IRQ and SoftIRQ: From Hard Interrupts to Deferred Handling
This article explains the fundamentals of Linux interrupt handling, covering the distinction between hardware and software interrupts, the processing flow of hard IRQs, maskable versus non‑maskable interrupts, the need for deferred execution, and a deep dive into softirqs, tasklets, and workqueues with code examples and performance considerations.
What is an interrupt?
The CPU time‑slices among many tasks, including hardware operations such as disk I/O and keyboard input as well as software tasks such as network packet processing. When a hardware device or software task needs immediate attention, it sends an interrupt request (IRQ) to the CPU, which pauses its current work to service the interrupt.
Hard Interrupts
Interrupt handling flow
When an interrupt occurs, the kernel must:
Preempt the current task: pause the running process.
Execute the interrupt handler: invoke the handler registered for that IRQ.
Resume the preempted task once the handler finishes.
Maskable and non‑maskable
Maskable interrupts on x86_64 can be disabled and re‑enabled with the cli and sti instructions:
static inline void native_irq_disable(void) {
    asm volatile("cli" : : : "memory"); // clear the IF flag
}

static inline void native_irq_enable(void) {
    asm volatile("sti" : : : "memory"); // set the IF flag
}

Maskable interrupts are blocked while interrupts are disabled; non‑maskable interrupts (NMIs) cannot be blocked and signal more urgent conditions.
Problem: speed vs. complexity
IRQ handlers must run extremely fast to avoid event loss, yet they often need to perform complex logic such as packet reception, creating an inherent conflict.
Solution: deferred interrupt handling
Traditionally, the kernel splits interrupt processing into two parts: the top half, executed in hard‑IRQ context, and the bottom half, which is deferred. "Bottom half" is now a generic term covering several deferral mechanisms.
Soft Interrupts (SoftIRQ)
SoftIRQ subsystem
Each CPU runs a kernel thread ksoftirqd that processes pending softirqs.
SoftIRQ handlers are registered with open_softirq(softirq_id, handler). For example, network TX/RX handlers are registered as:
open_softirq(NET_TX_SOFTIRQ, net_tx_action);
open_softirq(NET_RX_SOFTIRQ, net_rx_action);

The per‑CPU softirq load can be observed with top; the si column reflects time spent in softirq processing.
Main processing
The event‑driven loop in smpboot.c schedules ksoftirqd, which calls __do_softirq() to:
Determine which softirqs need handling.
Execute the corresponding softirq handlers.
Avoiding excessive CPU usage
Softirqs can consume significant CPU time, shown as a high si value in top. The kernel uses a budget mechanism to limit the time spent in softirq processing:
unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
...
while ((softirq_bit = ffs(pending))) {
    h->action(h); // run the handler for this softirq
    ...
}
if (pending) {
    if (time_before(jiffies, end) && !need_resched() && --max_restart)
        goto restart; // re-run only while within the time/restart budget
}

Hard‑IRQ → SoftIRQ call stack
After an IRQ handler finishes, irq_exit() checks for pending softirqs and may invoke __do_softirq(). This ensures that deferred work runs after the hard‑IRQ context.
Three deferred execution mechanisms
Linux provides three ways to defer work:
softirq
tasklet
workqueue
Softirq and tasklet run in softirq context, while workqueues run in process context.
softirq
Each CPU has a ksoftirqd thread. Softirqs are statically defined (e.g., NET_RX_SOFTIRQ) and listed in /proc/softirqs:
$ cat /proc/softirqs
        HI:       2        0 ...
     TIMER:  443727   467971 ...
    NET_TX:   57919    65998 ...
    NET_RX:   28728  5262341 ...
...

Triggering a softirq
void raise_softirq(unsigned int nr) {
    unsigned long flags;

    local_irq_save(flags);
    raise_softirq_irqoff(nr); // marks the softirq pending and, when not
                              // in interrupt context, wakes ksoftirqd
    local_irq_restore(flags);
}

// inside raise_softirq_irqoff():
if (!in_interrupt())
    wakeup_softirqd();

static void wakeup_softirqd(void) {
    struct task_struct *tsk = __this_cpu_read(ksoftirqd);

    if (tsk && tsk->state != TASK_RUNNING)
        wake_up_process(tsk);
}

tasklet
Tasklets are built on top of softirq and can be created at runtime. They use the HI_SOFTIRQ and TASKLET_SOFTIRQ softirqs.
void __init softirq_init(void) {
    int cpu;

    for_each_possible_cpu(cpu) {
        per_cpu(tasklet_vec, cpu).tail = &per_cpu(tasklet_vec, cpu).head;
        per_cpu(tasklet_hi_vec, cpu).tail = &per_cpu(tasklet_hi_vec, cpu).head;
    }

    open_softirq(TASKLET_SOFTIRQ, tasklet_action);
    open_softirq(HI_SOFTIRQ, tasklet_hi_action);
}

The tasklet structure:
struct tasklet_struct {
    struct tasklet_struct *next;  // next tasklet in the per-CPU list
    unsigned long state;          // scheduling/run state bits
    atomic_t count;               // non-zero means the tasklet is disabled
    void (*func)(unsigned long);  // the deferred function
    unsigned long data;           // argument passed to func
};

workqueue
Workqueues create kernel threads (workers) to execute queued tasks in process context, allowing blocking operations.
As Documentation/core-api/workqueue.rst puts it: "There are many cases where an asynchronous process execution context is needed and the workqueue (wq) API is the most commonly used mechanism."

Worker threads are visible as kworker processes:
$ systemd-cgls -k | grep kworker
├─ 5 [kworker/0:0H]
├─15 [kworker/1:0H]
...

References
Linux Inside (online book), "Interrupts and Interrupt Handling".
Open Source Linux
Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.