From Polling to Interrupts: Understanding Early Operating System Mechanisms
The article explains how early batch-processing systems wasted CPU time polling slow I/O devices, traces the inspiration to the IBM 704's overflow flag, and walks through the invention of hardware and software interrupts, covering interrupt types, the interrupt vector table, and handler dispatch, which together enable efficient CPU-device interaction.
In the early 1960s, developers building batch‑processing systems had to constantly poll slow external devices such as tape drives and printers, whose response times (e.g., 100 ms for a tape read, over 600 ms for a printer line) were far slower than the CPU.
The typical polling code repeatedly checks the device status in a tight loop, wasting CPU cycles until the device becomes ready:

int poll_count = 0;
// Busy-wait (polling) until the printer is ready
while (1) {
    poll_count++;  // every iteration burns a CPU cycle doing no useful work
    if (check_printer_status() == PRINTER_READY) {
        send_to_printer(print_data);
        break;
    }
}

This inefficiency led to the insight that the CPU should not have to poll; instead, the hardware could signal the CPU when an event occurs, much as the IBM 704 used an overflow flag (OV) to jump automatically to error-handling code after an arithmetic overflow.
Inspired by this, the article shows a simple assembly-like snippet illustrating how a program could test the overflow flag and branch to error handling without manual checks:

        ADD MQ        // AC = AC + MQ, may overflow
        TOV ERROR     // If OV flag is 1, jump to ERROR
        TRA CONTINUE  // Otherwise continue execution
ERROR
        // error handling code
CONTINUE
        // continue normal execution

Applying the same idea to division by zero, modern CPUs can detect a zero divisor instantly and transfer control to a predefined exception handler, eliminating the need for explicit checks in user code.
The concept evolved into the interrupt mechanism: external devices generate a signal when ready, the CPU detects the signal, jumps to a predefined interrupt handler, and then resumes the interrupted task. This design eliminates continuous polling and saves CPU resources.
Interrupts are classified as software interrupts (e.g., divide-by-zero, page fault, system call) and hardware interrupts (e.g., printer, disk, timer, keyboard). The article defines an enumeration of interrupt types:

// Interrupt type definitions
typedef enum {
    // Hardware interrupts
    INT_PRINTER = 0,        // printer interrupt
    INT_DISK = 1,           // disk interrupt
    INT_TIMER = 2,          // timer interrupt
    INT_KEYBOARD = 3,       // keyboard interrupt
    // Software interrupts
    INT_DIVIDE_BY_ZERO = 4, // divide-by-zero error
    INT_PAGE_FAULT = 5,     // page fault
    INT_SYSTEM_CALL = 6,    // system call
    MAX_INTERRUPT_TYPE = 7
} InterruptType;

To handle these interrupts, a function pointer type and an interrupt vector table are introduced:
// Interrupt handler function type
typedef void (*InterruptHandler)(void);

// Interrupt vector table structure (bool requires <stdbool.h> in C)
typedef struct {
    InterruptHandler handlers[MAX_INTERRUPT_TYPE];
    bool enabled[MAX_INTERRUPT_TYPE]; // interrupt enable flags
} InterruptVectorTable;

The vector table maps each interrupt number to its corresponding handler address, allowing the CPU to look up and invoke the appropriate routine when an interrupt occurs, as illustrated by the simplified handler dispatch function:
void handle_interrupt(InterruptVectorTable* ivt, InterruptType type) {
    // (save CPU context -- elided)
    // Dispatch only if the interrupt is valid, enabled, and registered
    if (type < MAX_INTERRUPT_TYPE && ivt->enabled[type] && ivt->handlers[type] != NULL) {
        ivt->handlers[type]();
    }
    // (restore CPU context and resume the interrupted task -- elided)
}

Overall, the article provides a concise historical and technical overview of how polling gave way to interrupt-driven I/O, laying the foundation for modern operating system design.
IT Services Circle