
Comprehensive Guide to RAID Levels: Architecture, Fault Tolerance, Performance, and Capacity

This article provides a comprehensive overview of RAID technology, explaining disk groups, virtual disks, detailed characteristics of RAID 0, 1, 1ADM, 5, 6, 10, 10ADM, 1E, 50, and 60, and compares their fault tolerance, I/O performance, and storage capacity considerations.


RAID (Redundant Array of Independent Disks) uses a RAID controller to combine multiple physical drives into a single large-capacity virtual disk, offering higher I/O performance and reliability than any single drive.

Related reading: In‑Depth Analysis of Disk RAID Key Technologies

1 Disk Groups and Virtual Disks

As data center workloads increase, a single server must handle more data. When a single physical disk cannot meet capacity or reliability requirements, multiple disks are combined in a specific way and presented as a single visible disk to satisfy actual needs. A disk group is a set of physical disks combined to appear as a whole, forming the basis of a virtual disk.

A virtual disk is a contiguous storage unit carved out of a disk group. It behaves like an independent disk but, depending on configuration, offers larger capacity, better data safety, and data redundancy that a single physical disk cannot provide.

A virtual disk can be:

A complete disk group.

Multiple complete disk groups.

A part of a disk group.

Parts of multiple disk groups (each group contributes a portion to form the virtual disk).

In the following description:

Disk groups are usually referred to as “Drive Group” (DG), “Array”, or “RAID group”.

Virtual disks are referred to as “Virtual Drive”, “Virtual Disk” (VD), “Volume”, or “Logical Device” (LD).

1.1 Introduction to RAID Levels

RAID combines multiple physical disks through a RAID controller into a single large-capacity virtual disk, providing higher I/O performance and reliability than a single disk.

The RAID controller supports Secure Boot only in EFI/UEFI mode and uses BIOS‑provided authentication mechanisms.

1.1.1 RAID 0

RAID 0, also known as Striping, offers the highest storage performance among all RAID levels. It improves performance by distributing consecutive data across multiple disks, allowing parallel I/O operations. However, it provides no data redundancy, making it suitable only for scenarios that require high I/O speed but low data safety.

Processing Flow

An I/O request to a logical RAID‑0 disk composed of three physical disks is split into three operations, each targeting one physical disk.

By establishing RAID 0, sequential data requests are dispersed across all three disks for simultaneous execution.

The parallel operation of three disks theoretically triples read/write speed, though actual gains are lower due to bus bandwidth and other factors.

Figure: RAID 0 Data Storage Principle
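The striping flow above can be sketched in a few lines of Python. The 4-byte stripe unit and three-disk layout are illustrative assumptions; real controllers use stripe units of tens or hundreds of KiB.

```python
# Toy model of RAID 0 striping: logical data is split into fixed-size
# stripe units that are distributed round-robin across the member disks.
STRIPE_UNIT = 4  # bytes per stripe unit (illustrative; real units are e.g. 64 KiB)

def raid0_write(data: bytes, num_disks: int):
    """Split `data` into stripe units and send unit i to disk i % num_disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), STRIPE_UNIT):
        unit_index = i // STRIPE_UNIT
        disks[unit_index % num_disks].extend(data[i:i + STRIPE_UNIT])
    return disks

def raid0_read(disks):
    """Re-interleave the stripe units to reconstruct the logical data."""
    out = bytearray()
    offsets = [0] * len(disks)
    d = 0
    while offsets[d] < len(disks[d]):
        out.extend(disks[d][offsets[d]:offsets[d] + STRIPE_UNIT])
        offsets[d] += STRIPE_UNIT
        d = (d + 1) % len(disks)
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"           # 16 bytes -> 4 stripe units
disks = raid0_write(data, 3)         # three disks, as in the flow above
assert raid0_read(disks) == data     # striped data reassembles losslessly
```

Because consecutive units land on different disks, the three member disks can service one large request in parallel, which is the source of the (theoretical) threefold speedup described above.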

1.1.2 RAID 1

RAID 1, also called Mirroring, duplicates each write to a mirror disk; reads can be served from either disk. When a failed disk is replaced, data can be reconstructed. RAID 1 offers high reliability but reduces usable capacity to half of the total, making it suitable for high‑availability applications such as finance.

Processing Flow

An I/O request to a logical RAID‑1 disk composed of two drives is issued.

When writing to Drive 0, the data is simultaneously copied to Drive 1.

When reading, data is fetched from both Drive 0 and Drive 1.

Figure: RAID 1 Data Storage Principle
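A minimal sketch of the mirroring flow, assuming an in-memory block store and round-robin read balancing (both illustrative choices, not a controller API):

```python
# Toy model of RAID 1 mirroring: every write goes to both drives;
# reads can be served by either drive (alternated here for load balancing).
class Raid1:
    def __init__(self):
        self.drives = [dict(), dict()]   # block number -> data
        self._next = 0                   # which drive serves the next read

    def write(self, block: int, data: bytes):
        for drive in self.drives:        # duplicate the write to the mirror
            drive[block] = data

    def read(self, block: int) -> bytes:
        drive = self.drives[self._next]  # alternate reads across the mirror
        self._next ^= 1
        return drive[block]

    def fail_and_rebuild(self, bad: int):
        """Replace drive `bad` and rebuild it from the surviving mirror."""
        self.drives[bad] = dict(self.drives[1 - bad])

vd = Raid1()
vd.write(0, b"payroll")
vd.drives[0] = {}                        # simulate losing drive 0
vd.fail_and_rebuild(0)
assert vd.read(0) == b"payroll"          # data survives the single-disk failure
```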

1.1.3 RAID 1ADM

RAID 1ADM provides two mirror disks for each working disk. Writes are duplicated to both mirrors, and reads are performed from all three disks. It offers higher reliability than RAID 1 but reduces usable capacity to one‑third, suitable for highly fault‑tolerant scenarios.

Processing Flow

An I/O request to a logical RAID‑1ADM disk composed of three drives is issued.

When writing to Drive 0, the data is simultaneously copied to Drive 1 and Drive 2.

When reading, data is fetched from Drive 0, Drive 1, and Drive 2.

Figure: RAID 1ADM Data Storage Principle

1.1.4 RAID 5

RAID 5 balances performance, data safety, and cost by using distributed parity. Parity data is spread across all member disks, allowing reconstruction of a failed disk using the remaining disks. It suits both large‑scale data operations and smaller transactional workloads.

Processing Flow

Parity block PA (covering data blocks A0–A2) and PB (covering B0–B2) are stored on different disks.

RAID 5 does not duplicate data; instead, it stores data and corresponding parity on separate disks. If one disk fails, the missing data can be rebuilt from the remaining data and parity.

RAID 5 can be seen as a compromise between RAID 0 and RAID 1:

It provides data safety better than RAID 0 but with higher disk‑space utilization than RAID 1, and lower storage cost.

Read/write speed is slightly lower than RAID 0, but write performance exceeds that of a single disk.

Figure: RAID 5 Data Storage Principle
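The XOR relationship that makes this rebuild possible can be shown directly; the block names A0–A2 and PA follow the description above:

```python
# RAID 5 parity is byte-wise XOR: PA = A0 xor A1 xor A2. Because XOR is its
# own inverse, any one missing block equals the XOR of the survivors and PA.
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks (the RAID 5 parity operation)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def raid5_rebuild(surviving_data, parity):
    """Rebuild the single missing data block from the survivors plus parity."""
    return xor_blocks(surviving_data + [parity])

a0, a1, a2 = b"AAAA", b"BBBB", b"CCCC"
pa = xor_blocks([a0, a1, a2])            # distributed parity block PA
# The disk holding A1 fails; XOR of the survivors and PA restores it:
assert raid5_rebuild([a0, a2], pa) == a1
```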

1.1.5 RAID 6

RAID 6 adds a second independent parity block to RAID 5, providing very high reliability—data remains accessible even if two disks fail simultaneously. However, the extra parity reduces write performance due to higher “write penalty”.

Processing Flow

Parity PA (first block) and QA (second block) are stored for each data set; similarly for PB/QB, etc.

Data and parity blocks are distributed across all member disks. When one or two disks fail, the controller can reconstruct missing data from the remaining healthy disks.

Figure: RAID 6 Data Storage Principle
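Controllers implement the second parity in vendor-specific ways, commonly with Reed-Solomon coding. A byte-wise sketch of one standard scheme (P = XOR of the data, Q = a weighted sum over GF(2^8) with generator g = 2) shows how two simultaneously lost data blocks can be recovered:

```python
# Sketch of RAID 6 double parity on single bytes.
def gmul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reducing by x^8 + x^4 + x^3 + x^2 + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D
    return p

def gpow(k: int) -> int:
    """g^k in GF(2^8) for generator g = 2."""
    r = 1
    for _ in range(k):
        r = gmul(r, 2)
    return r

def ginv(a: int) -> int:
    """Multiplicative inverse in GF(2^8), by brute force (fine for a demo)."""
    return next(x for x in range(1, 256) if gmul(a, x) == 1)

def parities(data):
    """P = XOR of the data bytes; Q = sum of g^k * D_k over GF(2^8)."""
    p = q = 0
    for k, d in enumerate(data):
        p ^= d
        q ^= gmul(gpow(k), d)
    return p, q

def rebuild_two(survivors, i, j, p, q):
    """Recover data bytes at positions i and j from survivors plus P and Q.
    `survivors` maps position -> byte for the disks still alive."""
    p2 = q2 = 0
    for k, d in survivors.items():
        p2 ^= d
        q2 ^= gmul(gpow(k), d)
    a = p ^ p2                # = D_i xor D_j
    b = q ^ q2                # = g^i*D_i xor g^j*D_j
    # Solve the 2x2 linear system over GF(2^8):
    di = gmul(gmul(gpow(j), a) ^ b, ginv(gpow(i) ^ gpow(j)))
    return di, a ^ di

data = [0x11, 0x22, 0x33, 0x44]          # one byte per data disk
p, q = parities(data)
survivors = {0: data[0], 2: data[2]}     # disks 1 and 3 fail simultaneously
assert rebuild_two(survivors, 1, 3, p, q) == (0x22, 0x44)
```

Because Q weights each data byte differently, P and Q give two independent equations, which is exactly what is needed to solve for two unknowns when two disks fail at once.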

1.1.6 RAID 10

RAID 10 combines mirroring and striping (RAID 1 + RAID 0). The first level mirrors disks, the second stripes across mirrored pairs, delivering both high performance and high data safety.

Processing Flow

Drive 0 + Drive 1 form sub‑group 0 (mirrored); Drive 2 + Drive 3 form sub‑group 1. Writes are striped across sub‑groups (RAID 0) while each write is mirrored within the sub‑group (RAID 1).

Figure: RAID 10 Data Storage Principle
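The two-level layout can be sketched as a mapping from stripe units to drives; the drive numbering matches the flow above, and the helper is purely illustrative:

```python
# Toy mapping for a 4-drive RAID 10: stripe (RAID 0) across two mirrored
# sub-groups, where each sub-group is a RAID 1 pair.
SUBGROUPS = [(0, 1), (2, 3)]             # (drive, mirror drive) pairs

def raid10_targets(stripe_unit: int):
    """Return both physical drives that stripe unit N is written to."""
    return SUBGROUPS[stripe_unit % len(SUBGROUPS)]

# Unit 0 -> sub-group 0 (Drives 0 and 1), unit 1 -> sub-group 1 (Drives 2
# and 3), unit 2 wraps back to sub-group 0, and so on.
assert raid10_targets(0) == (0, 1)
assert raid10_targets(1) == (2, 3)
assert raid10_targets(2) == (0, 1)
```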

1.1.7 RAID 10ADM

RAID 10ADM combines RAID 1ADM with RAID 0. The first level provides triple mirroring, the second level stripes across the mirrored groups, offering the same safety as RAID 1ADM with performance close to RAID 0.

Processing Flow

Drive 0, 1, 2 form sub‑group 0 (triple‑mirrored); Drive 3, 4, 5 form sub‑group 1. Writes are striped across sub‑groups while each write is duplicated to the two mirrors within the sub‑group.

Figure: RAID 10ADM Data Storage Principle

1.1.8 RAID 1E

RAID 1E is an enhanced version of RAID 1 that distributes mirrored data across all disks in the logical drive. It requires at least three disks, keeps usable capacity at half of the total, and supports disk counts (including odd numbers) that classic RAID 1 cannot.

Processing Flow

An I/O request to a three‑disk RAID 1E group is issued. Data stripes are evenly spread across the three disks, each stripe having a backup on another disk, so a single disk failure does not cause data loss.

Figure: RAID 1E Data Storage Principle
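One common RAID 1E layout places each stripe unit on one disk and its mirror copy on the next disk around the ring. The placement rule below is an illustrative assumption; actual layouts vary by controller:

```python
# One common RAID 1E layout on n disks: stripe unit i lives on disk
# (i % n) and its mirror copy on the next disk, ((i % n) + 1) % n.
def raid1e_placement(unit: int, n_disks: int = 3):
    primary = unit % n_disks
    mirror = (primary + 1) % n_disks
    return primary, mirror

# A unit and its copy never share a disk, so any single disk failure
# leaves at least one copy of every stripe unit intact.
for unit in range(9):
    primary, mirror = raid1e_placement(unit)
    assert primary != mirror
```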

1.1.9 RAID 50

RAID 50 (RAID 5 + RAID 0) stripes data across multiple RAID 5 sub‑groups. It inherits RAID 5’s parity‑based redundancy and RAID 0’s striping performance: each sub‑group can tolerate one disk failure, so several disk failures can be survived as long as no sub‑group loses more than one disk, while the array still delivers high I/O throughput.

Processing Flow

Parity blocks PA, PB, etc., are stored as in RAID 5. Data is striped across RAID 5 sub‑groups (RAID 0), so each sub‑group can survive a single disk failure without service interruption.

Figure: RAID 50 Data Storage Principle

1.1.10 RAID 60

RAID 60 (RAID 6 + RAID 0) combines double‑parity RAID 6 with striping. It provides higher fault tolerance (two simultaneous disk failures per sub‑group) at the cost of reduced write performance.

Processing Flow

Each data set has two parity blocks (PA, QA, etc.). Data is striped across RAID 6 sub‑groups, allowing any two disks in a sub‑group to fail without data loss.

Figure: RAID 60 Data Storage Principle

1.1.11 Fault Tolerance

RAID 0 provides no fault tolerance; any disk failure results in data loss.

RAID 1 offers 100% redundancy; a single disk failure can be recovered from its mirror.

RAID 5 uses distributed parity to survive one disk failure.

RAID 6 uses double parity to survive two simultaneous disk failures.

RAID 10 tolerates one disk failure in each mirrored pair; data is lost only when both disks of the same pair fail.

RAID 50 tolerates one disk failure per RAID 5 sub‑group.

RAID 60 tolerates two disk failures per RAID 6 sub‑group.
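The tolerance rules above can be condensed into a lookup table. Here `spans` stands for the number of sub-groups in the nested levels, and the figures for RAID 10/50/60 are best-case counts (failures spread one per sub-group); the function is an illustrative helper, not a controller API:

```python
# Best-case number of disk failures each RAID level can survive.
def max_disk_failures(level: str, spans: int = 1) -> int:
    return {
        "0": 0,           # no redundancy: any failure loses data
        "1": 1,           # one mirror copy
        "5": 1,           # single distributed parity
        "6": 2,           # double parity
        "10": spans,      # one per mirrored pair, best case
        "50": spans,      # one per RAID 5 sub-group
        "60": 2 * spans,  # two per RAID 6 sub-group
    }[level]

assert max_disk_failures("6") == 2
assert max_disk_failures("60", spans=3) == 6
```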

1.1.12 I/O Performance

RAID groups, by accessing multiple disks in parallel, achieve higher I/O rates than single disks.

RAID 0 delivers excellent performance through striping.

RAID 1’s duplicate writes limit write throughput to roughly that of a single disk, since every block must be written to both mirrors.

RAID 5 offers good throughput thanks to independent reads/writes and caching.

RAID 6 provides high reliability but incurs write performance loss due to two parity writes.

RAID 10 combines RAID 1’s safety with RAID 0’s speed; performance improves with more sub‑groups.

RAID 50 shows the best performance in high‑reliability scenarios, scaling with sub‑group count.

RAID 60, while fault‑tolerant, suffers write performance degradation because each disk writes two parity sets.

When mixed RAID groups (with and without parity) share a controller and use Write‑Back caching, parity‑protected groups may experience slower performance; setting non‑parity groups to Write‑Through can mitigate this.

1.1.13 Storage Capacity

Capacity considerations differ per RAID level:

RAID 0: Usable capacity = smallest disk size × number of disks.

RAID 1: Usable capacity = smallest disk size (mirrored).

RAID 5: Usable capacity = smallest disk size × (disk count − 1).

RAID 6: Usable capacity = smallest disk size × (disk count − 2).

RAID 10, RAID 50, RAID 60: Usable capacity = sum of sub‑group capacities.
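The formulas above can be captured in a small helper. The `spans` parameter (number of sub-groups) and the equal-size sub-group assumption are illustrative simplifications:

```python
# Usable-capacity formulas from the list above; disk sizes are in GB.
def usable_gb(level: str, disks: list, spans: int = 1) -> int:
    """Usable capacity for a RAID level. Nested levels (10/50/60) split the
    disks into `spans` equal sub-groups and sum the sub-group capacities."""
    n, smallest = len(disks), min(disks)
    if level == "0":
        return smallest * n              # striping: no capacity lost
    if level == "1":
        return smallest                  # mirroring: half the total
    if level == "5":
        return smallest * (n - 1)        # one disk's worth of parity
    if level == "6":
        return smallest * (n - 2)        # two disks' worth of parity
    if level in ("10", "50", "60"):
        inner = {"10": "1", "50": "5", "60": "6"}[level]
        per_span = n // spans            # assumes equal-size sub-groups
        return spans * usable_gb(inner, sorted(disks)[:per_span])
    raise ValueError(f"unknown RAID level: {level}")

disks = [4000] * 8                       # eight 4 TB disks
assert usable_gb("0", disks) == 32000
assert usable_gb("5", disks) == 28000
assert usable_gb("6", disks) == 24000
assert usable_gb("10", disks, spans=4) == 16000
assert usable_gb("60", disks, spans=2) == 16000
```

Note how the nested levels trade capacity for redundancy: the same eight disks yield 32 TB with no protection (RAID 0) but only 16 TB with double parity per sub-group (RAID 60).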


Tags: performance, fault tolerance, storage, data redundancy, RAID, capacity
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
