
Master Disk I/O Basics: A Complete Primer

This article explains what disk I/O is, compares HDD and SSD characteristics, describes the Linux I/O stack, outlines performance metrics such as throughput, IOPS, and latency, and reviews the factors, tools, caching strategies, and RAID configurations that affect storage performance.

Linux Kernel Journey

1. What Is Disk I/O?

Disk I/O is the process of moving data between a computer and its storage devices. There are two main storage types: mechanical hard disk drives (HDDs), which use rotating platters and a moving read/write head, and solid-state drives (SSDs), which use flash memory and have no moving parts.

For HDDs, sequential I/O is fast because the head can stay on a contiguous track, while random I/O is slower due to frequent seek operations.

SSDs outperform HDDs for both sequential and random I/O, but random writes still incur extra latency because of erase‑before‑write cycles.

Both device types have a minimum read/write unit (a 512 B sector for HDDs, a 4 KB-8 KB page for SSDs), and file systems group these into larger logical blocks (commonly 4 KB) to improve efficiency.

2. Disk Interfaces and Device Naming

Common interfaces include IDE, SCSI, SAS, SATA, and Fibre Channel, each assigning different device name prefixes (e.g., hd for IDE, sd for SCSI/SATA). Multiple disks can be partitioned (e.g., /dev/sda1, /dev/sda2) or combined into logical arrays.

3. Linux I/O Stack

The Linux storage stack consists of three layers:

File‑system layer : virtual file system and concrete file‑system implementations provide a standard API to applications.

Generic block layer : queues I/O requests, performs scheduling and merging, and passes them to the device layer.

Device layer : drivers interact with the physical storage.

Because I/O is often the slowest part of a system, Linux uses multiple caches (page cache, inode cache, directory cache, block buffers) to reduce direct device access.

4. Performance Metrics

Key metrics are:

Throughput : amount of data transferred per unit time; important for sequential workloads such as video streaming.

IOPS (Input/Output Operations Per Second): number of I/O requests processed per second; critical for random-access workloads. Approximate formula: IOPS = 1000 ms / (Tseek + Trotation + Ttransfer).

Response time : latency from request issuance to completion, including seek time, rotation delay, and transfer time.

Typical maximum random IOPS values are 76 IOPS for 7200 rpm HDDs, 111 IOPS for 10000 rpm HDDs, and 166 IOPS for 15000 rpm HDDs.

5. Factors Influencing Disk I/O Performance

Hardware : device type, rotation speed, cache size, and interface (SATA, NVMe, etc.). View attached devices with lsblk.

File system : the choice among EXT4, XFS, and Btrfs affects performance. Check mounted file-system types with df -T.

System configuration : kernel parameters, I/O scheduler, and mount options. View the active scheduler with cat /sys/block/sda/queue/scheduler.

Application I/O pattern : block size, read/write ratio, and synchronous vs. asynchronous I/O.

6. Disk I/O Benchmark Tools

iostat : monitors device and CPU usage. Install with sudo yum install sysstat and run iostat -mx 1.

iotop : shows per‑process I/O rates. Install with sudo yum install iotop and run iotop -o.

dd : simple read/write benchmark. Example: dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync for write, and dd if=testfile of=/dev/null bs=1M count=1024 iflag=nocache for read.

fio : flexible I/O workload generator. Example: fio --name=test --ioengine=libaio --iodepth=4 --rw=readwrite --bs=4k --size=1G --numjobs=1.

7. Disk Caching and I/O Cache Strategies

Modern disks have built‑in caches managed automatically by the OS, but administrators can query and adjust cache policies (e.g., hdparm -c /dev/sda, hdparm -W 1 /dev/sda) and monitor cache hit rates with iostat -dx 1.

Linux I/O cache can be tuned via vm.dirty_ratio and vm.dirty_background_ratio to control when dirty pages are flushed. Direct I/O (bypassing cache) can be used for specific workloads, and asynchronous I/O can be combined with caching for better responsiveness.

8. RAID and Disk Arrays

RAID combines multiple disks into a logical unit to improve performance and reliability. Common levels:

RAID 0 : striping for maximum speed, no redundancy. Created with mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb.

RAID 1 : mirroring for redundancy, 50% usable space. Created with mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb.

RAID 5 : striping with distributed parity, balances performance and fault tolerance. Created with mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc.

RAID 10 : combination of mirroring and striping, offering both speed and redundancy. Created with mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd.

RAID devices are managed with mdadm, monitored via cat /proc/mdstat, and performance can be observed with iostat -dxm /dev/md0 1. Proper RAID selection balances throughput, latency, data safety, and cost.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
