
Decoding iostat: How to Interpret Linux I/O Metrics Correctly

This article explains the meaning of iostat fields, the limitations of svctm and await, how /proc/diskstats provides raw counters, and offers formulas and examples for accurately analyzing Linux disk performance.

Efficient Ops

iostat is the standard Linux tool for examining I/O performance, but its fields are often misunderstood, especially by administrators coming from HP‑UX, where avserv genuinely represents disk service time.

In Linux the svctm field is deprecated; the man pages warn not to trust it.

The average I/O time is shown by await, which mixes device processing time and queue waiting time, so it does not reflect pure disk speed. The kernel’s /proc/diskstats provides the raw counters needed to understand iostat output.

<code># cat /proc/diskstats
   8       0 sda 239219 1806 37281259 2513275 904326 88832 50268824 26816609 0 4753060 29329105
   8       1 sda1 338 0 53241 6959 154 0 5496 3724 0 6337 10683
   8       2 sda2 238695 1797 37226458 2504489 620322 88832 50263328 25266599 0 3297988 27770221
   8      16 sdb 1009117 481 1011773 127319 0 0 0 0 0 126604 126604
   8      17 sdb1 1008792 480 1010929 127078 0 0 0 0 0 126363 126363
  253       0 dm-0 1005 0 8040 15137 30146 0 241168 2490230 0 30911 2505369
  253       1 dm-1 192791 0 35500457 2376087 359162 0 44095600 22949466 0 2312433 25325563
  253       2 dm-2 47132 0 1717329 183565 496207 0 5926560 7348763 0 2517753 7532688</code>

/proc/diskstats exposes eleven statistics fields per device (after the major/minor numbers and device name). All are cumulative counters since boot except field 9 (in_flight), which is an instantaneous gauge; the time fields are in milliseconds.

1. rd_ios – number of read I/Os completed.

2. rd_merges – number of merged read I/Os (adjacent reads combined by the I/O scheduler).

3. rd_sectors – number of sectors read.

4. rd_ticks – time spent on reads, in milliseconds (including queue wait).

5. wr_ios – number of write I/Os completed.

6. wr_merges – number of merged writes.

7. wr_sectors – number of sectors written.

8. wr_ticks – time spent on writes, in milliseconds.

9. in_flight – number of I/Os currently in progress (incremented when a request enters the queue, decremented when it completes); the only non‑cumulative field.

10. io_ticks – total time, in milliseconds, the device was busy with at least one I/O (wall‑clock time, not summed per I/O).

11. time_in_queue – io_ticks weighted by the number of in‑flight I/Os; used to derive avgqu‑sz.
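As an illustration, a single diskstats line can be parsed into these named fields. This is a minimal sketch in Python; the field names follow the list above, and newer kernels append extra discard/flush fields after these eleven.

```python
# Parse one /proc/diskstats line into named counters. Assumes the classic
# 11-field layout after major, minor, and device name.
FIELDS = ("rd_ios", "rd_merges", "rd_sectors", "rd_ticks",
          "wr_ios", "wr_merges", "wr_sectors", "wr_ticks",
          "in_flight", "io_ticks", "time_in_queue")

def parse_diskstats_line(line):
    parts = line.split()
    dev = parts[2]                                    # device name, e.g. "sda"
    stats = dict(zip(FIELDS, map(int, parts[3:14])))  # the 11 stat fields
    return dev, stats

# Example: the sda line from the dump above.
dev, stats = parse_diskstats_line(
    "8 0 sda 239219 1806 37281259 2513275 904326 88832 50268824 26816609 0 4753060 29329105")
print(dev, stats["rd_ios"], stats["io_ticks"])
```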

Formulas (Δ denotes the difference between two samples, Δt the sampling interval; all tick counters are in milliseconds):

tps = (Δrd_ios + Δwr_ios) / Δt

r/s = Δrd_ios / Δt

w/s = Δwr_ios / Δt

rkB/s = (Δrd_sectors / Δt) * 512 / 1024

wkB/s = (Δwr_sectors / Δt) * 512 / 1024

rrqm/s = Δrd_merges / Δt

wrqm/s = Δwr_merges / Δt

avgrq‑sz = (Δrd_sectors + Δwr_sectors) / (Δrd_ios + Δwr_ios)

avgqu‑sz = Δtime_in_queue / Δt

await = (Δrd_ticks + Δwr_ticks) / (Δrd_ios + Δwr_ios)

r_await = Δrd_ticks / Δrd_ios

w_await = Δwr_ticks / Δwr_ios

%util = Δio_ticks / Δt × 100 (percentage of the interval during which the device had at least one I/O in flight; Δt in ms)

svctm – deprecated; computed as Δio_ticks / (Δrd_ios + Δwr_ios), i.e. roughly util/tps, which is not a true device service time.
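Putting the formulas together, here is a sketch of how iostat's columns can be derived from two diskstats samples. The function name and dictionary layout are illustrative, not sysstat's actual implementation; it expects the field names from the list above.

```python
def iostat_metrics(prev, curr, dt_ms):
    """Derive iostat-style columns from two diskstats samples taken
    dt_ms milliseconds apart. Tick fields are in milliseconds."""
    d = {k: curr[k] - prev[k] for k in prev}   # deltas between samples
    ios = d["rd_ios"] + d["wr_ios"]
    dt_s = dt_ms / 1000.0
    return {
        "tps":      ios / dt_s,
        "rkB/s":    d["rd_sectors"] * 512 / 1024 / dt_s,   # sectors are 512 B
        "wkB/s":    d["wr_sectors"] * 512 / 1024 / dt_s,
        "avgrq-sz": (d["rd_sectors"] + d["wr_sectors"]) / ios if ios else 0.0,
        "avgqu-sz": d["time_in_queue"] / dt_ms,
        "await":    (d["rd_ticks"] + d["wr_ticks"]) / ios if ios else 0.0,
        "%util":    min(100.0, d["io_ticks"] / dt_ms * 100),
    }

# Illustrative one-second sample: 133 reads of 16 sectors each.
prev = {"rd_ios": 0, "rd_merges": 0, "rd_sectors": 0, "rd_ticks": 0,
        "wr_ios": 0, "wr_merges": 0, "wr_sectors": 0, "wr_ticks": 0,
        "in_flight": 0, "io_ticks": 0, "time_in_queue": 0}
curr = {"rd_ios": 133, "rd_merges": 0, "rd_sectors": 2128, "rd_ticks": 997,
        "wr_ios": 0, "wr_merges": 0, "wr_sectors": 0, "wr_ticks": 0,
        "in_flight": 0, "io_ticks": 996, "time_in_queue": 1000}
m = iostat_metrics(prev, curr, 1000)
print(m)
```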

Example: a stress test on several identical disks showed one disk (sdb) with non‑zero rrqm/s, indicating more I/O merging and higher performance, which was traced to a different I/O scheduler configured for that device.

%util reflects the proportion of time the device was busy, not the number of concurrent I/Os; even 100 % does not necessarily mean saturation because modern disks handle multiple I/Os in parallel.
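A toy calculation makes this concrete (all numbers are illustrative): a device that spends 1 ms per I/O is "busy" for the entire interval whether it handles one request at a time or eight in parallel, so %util reads 100 % in both cases while throughput differs eightfold.

```python
io_time_ms = 1.0       # per-I/O device time (illustrative)
interval_ms = 1000.0   # sampling interval

# Serial submission: one outstanding I/O at a time.
serial_iops = interval_ms / io_time_ms

# Parallel submission: 8 outstanding I/Os; the device is still busy
# for the whole interval, so io_ticks (and hence %util) is identical.
parallel_iops = 8 * interval_ms / io_time_ms

print(serial_iops, parallel_iops)  # same 100 % util, 8x the throughput
```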

For SSDs, typical await values are below 2 ms; for 7200 RPM HDDs a typical service time is about 8.4 ms. Whether a particular await is problematic depends on workload characteristics and expected latency.
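The ~8.4 ms figure for a 7200 RPM disk can be sanity‑checked: average rotational latency is half a revolution, plus a typical average seek time (the seek figure below is an illustrative assumption).

```python
rpm = 7200
rotational_latency_ms = 60_000 / rpm / 2   # half a revolution: ~4.17 ms
avg_seek_ms = 4.2                          # illustrative average seek time
service_time_ms = rotational_latency_ms + avg_seek_ms
print(round(service_time_ms, 2))           # ~8.4 ms per random I/O
```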

For example, this iostat output shows a device at 99.6 % utilization with an average queue size of only 1 — a single outstanding request stream, not a saturated disk:

<code>Device:  rrqm/s  wrqm/s     r/s   w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdg        0.00    0.00  133.00  0.00  2128.00    0.00     16.00      1.00   7.50   7.49  99.60</code>

The numbers are self‑consistent: 133 I/Os per second × 7.49 ms each ≈ 996 ms of busy time per second, hence %util ≈ 99.6 %.
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.