
Understanding Storage I/O Performance, RAID Write Penalties, and Business Model Characteristics

This article explains how real‑world business requirements drive storage performance configuration, details I/O aggregation and RAID write penalties, compares OLTP, OLAP, VDI and SPC‑1 workloads, and discusses the impact of SSD, SAS, FC links, cache, and RAID levels on throughput and latency.

Architects' Tech Alliance

In practical projects, performance configurations must match real business demands, requiring end‑to‑end analysis of host ports, storage systems, and backend disks; the article lists common performance‑evaluation challenges and best‑practice insights.

I/O aggregation, full-stripe writes, and the write penalty

When I/Os are aggregated to the size of a full stripe, no pre-read is needed and the RAID write penalty is avoided. By contrast, a small write on RAID5-5 (4D+1P) must read the old data and the old parity (two pre-reads), then write the new data and the new parity, effectively expanding one host I/O into four back-end I/Os.

During a full-stripe write on the same 4D+1P layout, the four data blocks are written simultaneously without any pre-read, plus one parity block: four host I/Os become five back-end I/Os, a penalty factor of 1.25 instead of 4, greatly improving efficiency compared with non-full-stripe writes.
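The contrast above can be sketched numerically. This is an illustrative calculation following the article's numbers; the function names are not a vendor API:

```python
# Back-end I/O cost per host write for a 4D+1P RAID5 stripe, following the
# article's numbers; function names are illustrative, not a vendor API.

def small_write_ios(host_writes: int) -> int:
    # Read-modify-write: read old data + read old parity,
    # then write new data + new parity -> 4 back-end I/Os per host write.
    return host_writes * 4

def full_stripe_write_ios(host_writes: int, data_disks: int = 4) -> float:
    # Full-stripe write: no pre-reads; one parity write per data_disks
    # data writes -> (data_disks + 1) / data_disks back-end I/Os per write.
    return host_writes * (data_disks + 1) / data_disks

print(small_write_ios(4))        # 16 back-end I/Os for four small writes
print(full_stripe_write_ios(4))  # 5.0 when the same four merge into one full stripe
```

The same four host writes cost 16 back-end I/Os unmerged but only 5 when coalesced into one full stripe, which is why aggregation matters so much.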

Storage I/O merging capability varies by vendor. Database workloads are typically random I/O, and most vendors cannot merge every I/O into a full stripe. Merging ability depends on two factors: the host-side I/O pattern (whether it is sequential and contiguous, which is influenced by host software, block devices, volume management, and HBA policies) and the storage-side merging mechanisms (cache, block layer, and disks) that sort and combine small I/Os into larger ones.

For sequential small I/Os, merging into full‑stripe writes is usually possible; for random I/Os, merging depends on algorithm implementation and memory size, leading to varying degrees of consolidation.

Typical business models and their I/O characteristics

1. OLTP – Transactions read and write very small amounts of data with many concurrent users; the latency requirement is 10–20 ms. Data LUNs see small random I/O (≈8 KB) with a read/write ratio of roughly 3:2, while log LUNs see sequential small writes across multiple paths.

2. OLAP – Mostly read-heavy analytical queries that may run for hours; data LUNs see large sequential I/O (≈512 KB) across multiple paths with more than 90% reads, while temporary LUNs see mixed random I/O of 200 KB or larger.

3. VDI – Includes a boot storm (read-intensive), a login storm (write-intensive), and steady state; latency is around 10 ms. I/O is a mix of small sequential requests with some larger writes; cache hit rates can exceed 80% in steady state.

4. SPC‑1 – Industry‑standard random I/O benchmark representing typical database and email workloads; read/write ratio ~4:6, I/O size ~4 KB, with noticeable hot‑spot regions.
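These I/O profiles translate into back-end load once the RAID write penalty is applied. A minimal sizing sketch: the 10,000 host IOPS figure is assumed for illustration, and the 40%/60% split follows the SPC-1-style read/write ratio above:

```python
# Sizing sketch: convert host IOPS into back-end disk IOPS once the RAID
# write penalty is applied. The 10,000 host IOPS figure is assumed; the
# 40%/60% read/write split follows the SPC-1 profile described above.

def backend_iops(host_iops: float, read_fraction: float, write_penalty: int) -> float:
    # Reads cost one back-end I/O each; writes are multiplied by the penalty.
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * write_penalty

print(backend_iops(10_000, 0.4, 4))  # 28000.0 back-end IOPS on RAID5
```

A write-heavy 10,000-IOPS workload on RAID5 must be sized for 28,000 back-end IOPS, nearly triple the host figure, which is the practical consequence of the write penalty.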

SSD, SAS, NL‑SAS performance characteristics

FC link bandwidth is a critical factor in overall storage bandwidth calculations. For an 8 Gbps FC link, the theoretical data bandwidth is calculated as:

line rate × encoding efficiency × FC protocol efficiency ÷ 8 ÷ 1024 ÷ 1024 ≈ 787.5 MB/s (single direction), where the 8GFC line rate is 8.5 Gbaud and 8b/10b encoding gives 80% efficiency; dividing by 8 converts bits to bytes, and dividing by 1024 twice converts bytes to MB. Actual bandwidth is lower due to protocol overhead and hardware limitations.
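The formula can be checked directly. The ~97% protocol efficiency below is an assumption chosen to reproduce the quoted 787.5 MB/s figure; the line rate and encoding efficiency are the standard 8GFC values:

```python
# 8GFC uses an 8.5 Gbaud line rate with 8b/10b encoding (80% efficient);
# the ~97% protocol efficiency is an assumption chosen to match the
# article's quoted 787.5 MB/s figure.

line_rate_bps = 8.5e9   # 8GFC line clock
encoding_eff = 8 / 10   # 8b/10b encoding efficiency
protocol_eff = 0.9715   # assumed FC framing/protocol efficiency

mb_per_s = line_rate_bps * encoding_eff * protocol_eff / 8 / 1024 / 1024
print(round(mb_per_s, 1))  # 787.5 MB/s, single direction
```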

When estimating maximum read/write bandwidth for a given array, the limiting factor is the minimum of the product’s advertised bandwidth, the aggregate disk bandwidth, and the front‑end link bandwidth.
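As a minimal sketch of this bottleneck rule, with all figures assumed for illustration (MB/s):

```python
# Minimal sketch of the bottleneck rule, with assumed figures (MB/s):
# achievable bandwidth = min(advertised spec, aggregate disks, front-end links).

controller_spec = 6_400    # product's advertised bandwidth (assumed)
disk_bandwidth = 48 * 180  # 48 disks at ~180 MB/s each (assumed)
link_bandwidth = 8 * 787.5 # eight 8 Gbps FC ports at ~787.5 MB/s each

limit = min(controller_spec, disk_bandwidth, link_bandwidth)
print(limit)  # 6300.0: here the front-end links are the bottleneck
```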

Write penalties differ by RAID level. For small random writes, RAID5-5 (4D+1P) expands each host write into four back-end I/Os (read old data, read old parity, write new data, write new parity), while RAID6-6 (4D+2P) expands each into six because two parity blocks must be read and updated, increasing I/O count and latency. The write-penalty factor dramatically affects performance, especially for write-heavy workloads.

Cache acceleration plays a vital role: write‑back caching converts synchronous single writes into asynchronous batch writes, improving throughput; read caching reduces latency by serving data directly from cache when hits occur. Full cache hits eliminate disk access entirely, delivering the maximum possible IOPS.
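The effect of cache hits on average read latency can be sketched with assumed service times; the 0.2 ms cache and 5 ms disk figures below are round illustrative numbers, not measurements:

```python
# Sketch of cache's effect on average read latency. The 0.2 ms cache and
# 5 ms disk service times are assumed round numbers, not measured figures.

def avg_latency_ms(hit_ratio: float, cache_ms: float = 0.2, disk_ms: float = 5.0) -> float:
    # Hits are served from cache; misses pay the full disk access time.
    return hit_ratio * cache_ms + (1 - hit_ratio) * disk_ms

print(round(avg_latency_ms(0.8), 2))  # 1.16 ms at an 80% hit ratio
print(avg_latency_ms(1.0))            # 0.2 ms: a full cache hit never touches disk
```

Even at an 80% hit ratio, the remaining 20% of misses dominate the average, which is why hit ratio improvements pay off disproportionately.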

How to distinguish sequential vs. random I/O

Sequential I/O accesses contiguous disk regions, random I/O accesses non‑contiguous regions, and mixed I/O combines both. Sequential I/O generally yields higher performance on both HDDs and SSDs due to reduced seek and rotation overhead.

Impact of I/O size

Small I/O (≤16 KB) is measured in IOPS, while large I/O (≥32 KB) is measured in bandwidth. SPC‑1 evaluates random small I/O (IOPS), whereas SPC‑2 evaluates large I/O (bandwidth). Larger I/O consumes more processing resources, and random I/O performance degrades as block size increases.
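The two metrics describe the same stream and are linked by bandwidth = IOPS × I/O size. A quick sketch with illustrative workload numbers:

```python
# Bandwidth and IOPS describe the same stream: bandwidth = IOPS × I/O size.
# The workload figures below are illustrative assumptions.

def bandwidth_mb_s(iops: float, io_size_kb: float) -> float:
    return iops * io_size_kb / 1024  # KB/s -> MB/s

print(bandwidth_mb_s(20_000, 4))  # 78.125: high-IOPS small-I/O moves little data
print(bandwidth_mb_s(500, 512))   # 250.0: modest-IOPS large-I/O moves far more
```

This is why small-I/O workloads are quoted in IOPS and large-I/O workloads in MB/s: each metric captures the dimension that actually limits the workload.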

RAID level performance and capacity trade‑offs

RAID10, RAID5, and RAID6 have write-penalty factors of 2, 4, and 6 respectively. For the same number of disks, RAID6 tolerates any two disk failures but yields less usable capacity than RAID5, while RAID10 has the lowest write penalty at the cost of only 50% usable capacity.
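A comparison sketch using the usual textbook penalty factors and usable-capacity fractions for an n-disk group (n = 8 here, chosen for illustration):

```python
# Comparison sketch using the usual textbook write-penalty factors and
# usable-capacity fractions for an n-disk group (n = 8 here, assumed).

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def usable_fraction(level: str, n: int) -> float:
    # RAID10 mirrors everything; RAID5 loses one disk to parity, RAID6 two.
    return {"RAID10": 0.5, "RAID5": (n - 1) / n, "RAID6": (n - 2) / n}[level]

for level in ("RAID10", "RAID5", "RAID6"):
    print(level, WRITE_PENALTY[level], usable_fraction(level, 8))
```

For eight disks this yields usable fractions of 0.5, 0.875, and 0.75, making the capacity/penalty trade-off explicit.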

Overall, understanding I/O characteristics, RAID write penalties, cache behavior, and bandwidth limits is essential for accurate storage performance evaluation and optimal system design.

Tags: cache, storage, SSD, RAID, I/O performance, database workloads, FC bandwidth
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
