How Does an OS Manage I/O? From Devices to DMA and Interrupts
This article explains the fundamentals of operating‑system I/O management, covering device types, controllers, memory‑mapped I/O, I/O ports, interrupt handling, programmed I/O, DMA, disk scheduling, RAID, stable storage, and clock/timer mechanisms.
I/O Devices and Types
An I/O device is external hardware that communicates with a computer by receiving data (output from the computer's perspective) and sending data (input). Devices fall into two broad classes: block devices, which store data in fixed‑size, individually addressable blocks and support random access, and character devices, which transmit a stream of characters with no block structure.
Block Devices
Block devices store data in fixed‑size blocks (typically 512 to 65,536 bytes), each with its own physical address. Common examples are hard disks, Blu‑ray discs, and USB drives. Because blocks are independently addressable, they can be read or written individually, but modifying only part of a block still requires reading the whole block, changing the relevant bytes, and writing the whole block back.
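The read‑modify‑write cycle described above can be sketched in a few lines of Python. This is a toy model (an in‑memory list stands in for the disk, and the block size is illustrative), not a real driver:

```python
BLOCK_SIZE = 512  # bytes per block; 512 is a common sector size

# A toy "disk": a list of fixed-size blocks held in memory.
disk = [bytearray(BLOCK_SIZE) for _ in range(4)]

def write_bytes(block_no: int, offset: int, data: bytes) -> None:
    """Modify part of a block: read the whole block, patch it, write it back."""
    block = bytearray(disk[block_no])        # 1. read the entire block
    block[offset:offset + len(data)] = data  # 2. modify only the relevant bytes
    disk[block_no] = block                   # 3. write the whole block back

write_bytes(2, 10, b"hello")
```

Even a five‑byte update touches all 512 bytes of the block, which is why block size is a real performance trade‑off.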
Character Devices
Character devices transfer data one character at a time, are non‑seekable, and include printers, network interfaces, mice, and many other peripherals.
Device Controllers
A device controller mediates between the CPU and attached devices. It receives commands from the CPU, stores them in special‑purpose registers, and exchanges data with the device. Controllers may be connected via PCIe slots or other bus interfaces.
Special‑purpose registers are dedicated to a single task (e.g., segment registers cs, ds, gs). General‑purpose registers (eax, ebx, etc.) can be used for any purpose.
Memory‑Mapped I/O vs. Port‑Mapped I/O
Controllers expose registers that the OS can read or write. Two common access methods exist:
Port‑mapped I/O: each register is assigned an I/O port number (an 8‑ or 16‑bit integer). The CPU accesses these ports with special assembly instructions such as IN REG,PORT and OUT PORT,REG.
Memory‑mapped I/O: the registers are mapped into the regular memory address space. They appear as ordinary memory locations and can be accessed with normal load/store instructions from C or C++.
Memory‑mapped I/O simplifies programming because the registers can be treated like normal variables, but the hardware must not cache device pages: a cached copy of a status register would go stale the moment the device changes it.
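A common way to use such registers is programmed (polled) I/O: the CPU spins on a status register until the device signals readiness. The sketch below simulates this in Python; a dict stands in for the register file, and the register names and READY bit are illustrative (in real memory‑mapped I/O these would be fixed physical addresses accessed through volatile pointers in C):

```python
# Toy register file for a hypothetical device.
regs = {"STATUS": 0, "DATA": 0}
READY = 1

def device_produce(value: int) -> None:
    """Simulate the device depositing data and raising its ready bit."""
    regs["DATA"] = value
    regs["STATUS"] = READY

def polled_read() -> int:
    """Programmed I/O: spin until the status register reads READY."""
    while regs["STATUS"] != READY:
        pass                   # busy-wait; the CPU does no useful work here
    regs["STATUS"] = 0         # acknowledge by clearing the ready bit
    return regs["DATA"]

device_produce(0x2A)
value = polled_read()
```

The busy‑wait loop is exactly the CPU time that interrupts and DMA (below) exist to reclaim.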
Direct Memory Access (DMA)
DMA allows a peripheral to transfer data directly to or from main memory without CPU intervention, freeing the CPU for other work. A DMA controller has registers for source/destination addresses, byte count, and control flags. Typical DMA operation:
CPU programs the DMA controller with source, destination, and transfer size.
DMA controller signals the device to start the transfer.
Data moves over the system bus while the CPU continues executing.
When the transfer completes, the DMA controller raises an interrupt.
DMA can operate in burst mode (continuous transfers) or cycle‑stealing (interleaving with CPU cycles).
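The four‑step DMA sequence above can be modeled with a small Python class. This is a simulation only (the register names and the shared `bytearray` "memory" are illustrative); a real DMA controller does the copy in hardware over the system bus:

```python
class DMAController:
    """Toy DMA controller mirroring the steps in the text."""

    def __init__(self, memory: bytearray):
        self.memory = memory
        self.src = self.dst = self.count = 0
        self.interrupt_raised = False

    def program(self, src: int, dst: int, count: int) -> None:
        # Step 1: the CPU loads the address and byte-count registers.
        self.src, self.dst, self.count = src, dst, count

    def start(self) -> None:
        # Steps 2-3: data moves over the "bus" with no per-byte CPU work.
        self.memory[self.dst:self.dst + self.count] = \
            self.memory[self.src:self.src + self.count]
        # Step 4: completion is signalled by raising an interrupt.
        self.interrupt_raised = True

mem = bytearray(64)
mem[0:4] = b"DATA"
dma = DMAController(mem)
dma.program(src=0, dst=32, count=4)
dma.start()
```

The key point the model captures: after `program()` and `start()`, the CPU's only remaining involvement is handling the completion interrupt.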
Interrupt Handling
When an I/O device finishes its operation, it raises an interrupt. The CPU saves its state, jumps to the interrupt service routine (ISR), and the ISR interacts with the device controller to retrieve status or data. After handling, the ISR restores the CPU state and resumes the interrupted task.
Interrupts can be non‑nested, nested, re‑entrant, or prioritized (simple, standard, high, grouped). Typical ISR steps include saving registers, setting up context, acknowledging the interrupt, processing data, and restoring registers.
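The typical ISR steps listed above (save state, acknowledge, process, restore) can be traced in a toy dispatcher. Everything here is illustrative, for demonstration: real ISRs run in kernel mode with hardware‑saved state, not Python dicts:

```python
# Toy CPU and device state.
cpu_registers = {"pc": 100, "acc": 7}
saved_stack = []
device_status = {"pending": True, "data": 42}
received = []

def isr() -> None:
    saved_stack.append(dict(cpu_registers))  # 1. save the CPU's registers
    device_status["pending"] = False         # 2. acknowledge the interrupt
    received.append(device_status["data"])   # 3. retrieve data from controller
    cpu_registers.update(saved_stack.pop())  # 4. restore state and resume

isr()
```

After `isr()` returns, the register contents are exactly what they were before the interrupt, which is what lets the interrupted task resume as if nothing happened.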
I/O Software Layers
Operating‑system I/O software is organized into four layers:
Device‑independent interface (provides uniform naming and abstraction).
Device driver (character or block).
Interrupt handling layer.
Hardware controller layer.
Device‑independent software offers a common API so applications can read/write any device without knowing its specifics.
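A minimal sketch of that common API, with hypothetical class names: the caller invokes one `device_read()` entry point and never sees which driver (block or character) actually runs:

```python
class BlockDevice:
    """Driver sketch: random access by block number."""
    def __init__(self, blocks): self.blocks = blocks
    def read(self, n): return self.blocks[n]          # n = block number

class CharDevice:
    """Driver sketch: sequential stream, no seeking."""
    def __init__(self, stream): self.stream = iter(stream)
    def read(self, n):                                # n = character count
        return "".join(next(self.stream) for _ in range(n))

def device_read(dev, n):
    """Device-independent layer: one uniform entry point for any driver."""
    return dev.read(n)

disk = BlockDevice(["blk0", "blk1"])
kbd = CharDevice("hello")
```

Real kernels achieve the same effect with tables of driver function pointers behind calls like `read()`; the principle, uniform naming over heterogeneous drivers, is the same.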
Disk Scheduling Algorithms
Disk performance is dominated by seek time, rotational latency, and transfer time. Common scheduling policies:
FCFS (First‑Come‑First‑Served) – simple but often inefficient.
Shortest‑Seek‑First (SSF) – selects the pending request closest to the current head position, reducing total head movement.
Elevator (SCAN) – moves the head in one direction servicing requests until the end, then reverses direction.
These algorithms trade off fairness and efficiency.
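The three policies can be compared directly by simulating head movement. The request queue and starting cylinder below are an illustrative example, not from the text; the implementations follow the policy definitions above:

```python
def total_movement(order, head):
    """Total head displacement when servicing `order` starting from `head`."""
    moves, pos = 0, head
    for cyl in order:
        moves += abs(cyl - pos)
        pos = cyl
    return moves

def fcfs(requests, head):
    """First-come-first-served: service strictly in arrival order."""
    return list(requests)

def ssf(requests, head):
    """Shortest-seek-first: always pick the closest pending cylinder."""
    pending, order, pos = list(requests), [], head
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan_up(requests, head):
    """Elevator (SCAN): sweep upward first, then reverse for the rest."""
    up = sorted(c for c in requests if c >= head)
    down = sorted((c for c in requests if c < head), reverse=True)
    return up + down

requests, head = [1, 36, 16, 34, 9, 12], 11
```

On this queue FCFS moves the head 111 cylinders, SSF 61, and SCAN 60, illustrating the fairness/efficiency trade‑off: SSF and SCAN cut movement roughly in half, but SSF can starve far‑away requests.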
RAID and Stable Storage
RAID (Redundant Array of Independent Disks) combines multiple disks for performance, redundancy, or both (levels 0 through 6). Stable storage ensures that a write either fully succeeds on both mirrored disks or leaves the old data intact, using techniques such as double‑write verification and crash recovery.
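The double‑write discipline can be sketched as follows. In‑memory dicts stand in for the two mirrored disks, and the recovery rule is simplified (it assumes disk A's copy, once verified, is authoritative); real stable storage also handles the case where the crash corrupts the block being written on A:

```python
# Two mirrored "disks": block number -> data.
disk_a, disk_b = {}, {}

def stable_write(block: int, data: bytes) -> None:
    """Write to disk A, verify the write, only then write to disk B."""
    disk_a[block] = data
    assert disk_a[block] == data   # read back and verify before proceeding
    disk_b[block] = data

def recover(block: int) -> None:
    """Crash recovery: if the copies disagree, copy A's verified block to B."""
    if disk_a.get(block) != disk_b.get(block):
        disk_b[block] = disk_a[block]

stable_write(0, b"v1")
disk_b[0] = b"garbage"   # simulate a crash between the two mirror writes
recover(0)
```

After recovery the mirrors agree again, so the write appears to have either fully happened or not happened at all, which is the stable‑storage guarantee.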
Clocks and Timers
Hardware clocks generate periodic interrupts (e.g., 50‑60 Hz from mains or programmable timers). Clock software uses these interrupts to maintain system time, enforce CPU time slices, collect statistics, and provide watchdog functionality. Soft timers are implemented in software to avoid frequent hardware interrupts.
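The clock‑software duties listed above can be demonstrated with one tick handler. All values here (the 3‑tick quantum, the soft timer's expiry) are illustrative, and Python globals stand in for kernel state:

```python
HZ = 60                   # hardware tick rate (e.g. 60 interrupts per second)
QUANTUM = 3               # ticks per CPU time slice (illustrative)
ticks = 0                 # system time, in ticks since boot
quantum_left = QUANTUM
timers = {5: "watchdog"}  # soft timers: expiry tick -> callback name
fired, preemptions = [], []

def clock_tick() -> None:
    """One hardware clock interrupt: time, time slice, and soft timers."""
    global ticks, quantum_left
    ticks += 1                        # maintain the system time
    quantum_left -= 1
    if quantum_left == 0:             # enforce the CPU time slice
        preemptions.append(ticks)
        quantum_left = QUANTUM
    if ticks in timers:               # fire any soft timer that expires now
        fired.append(timers.pop(ticks))

for _ in range(6):
    clock_tick()
```

One cheap interrupt handler thus multiplexes a single hardware clock into timekeeping, scheduling, and an arbitrary number of soft timers.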
Core Takeaways
Effective I/O management requires understanding device types, controller interfaces, memory‑mapped versus port‑mapped access, DMA for high‑throughput transfers, and robust interrupt handling. Layered I/O software, proper disk scheduling, and reliable storage techniques such as RAID and stable storage further improve system performance and reliability.
Liangxu Linux
Liangxu, a self‑taught IT professional now working as a Linux development engineer at a Fortune 500 multinational, shares extensive Linux knowledge: fundamentals, applications, tools, plus Git, databases, Raspberry Pi, etc.