Fundamentals · 12 min read

All‑Flash Storage System Architecture and Key Functions (Dorado Flash Product Example)

The article explains the fully interconnected architecture of an all‑flash storage system, covering redundant FRU modules, RDMA‑based high‑speed networking, intelligent disk enclosures, SSD structure, wear‑leveling, bad‑block management, data redundancy, and the differences between SAS and NVMe protocols.

Architects' Tech Alliance
An all‑flash enterprise storage system typically adopts a fully interconnected architecture in which front‑end interface modules, controller modules, back‑end interface modules, power modules, BBU modules, and fan modules are all field‑replaceable units (FRUs) with no single point of failure; every FRU has at least two‑way redundancy and is hot‑swappable.

The network interconnect uses RDMA‑based high‑speed networking to achieve low‑latency global cache sharing; at the back end, intelligent disk enclosures equipped with their own CPUs and memory offload reconstruction tasks from the controllers.

Using the Dorado flash product as an example, this article discusses the architecture and key functions of an all‑flash storage system.

Dorado supports NVMe disk enclosures, providing unified SAN and NAS storage services with end‑to‑end low‑latency protocols such as FC‑NVMe, FC, iSCSI, NVMe over RoCE, and NFS/CIFS/NDMP. RDMA enables mirroring between the two controllers and scale‑out connections between multiple controller enclosures.

1. Storage Controllers

A symmetric active‑active architecture pairs two controllers that handle failover and load balancing between them; the controllers mirror each other over RDMA.

Figure: Dorado flash product overview

2. Disk Enclosure Introduction

Dorado supports both SAS and NVMe disk enclosures; each enclosure contains its own CPU and DDR memory, forming an independent compute system that offloads tasks such as reconstruction calculations from the controllers, thereby reducing controller load.

Conventional PCIe root‑complex (RC) systems have limited PCIe address space, which restricts the number of NVMe drives that can be attached; Dorado's disk enclosures use independent PCIe address domains that isolate the controller and disk address spaces, allowing a larger number of drives.

NVMe disk enclosures connect to controllers via 100 Gb RDMA ports, providing high‑bandwidth, low‑latency transmission channels.

3. SSD Disk Introduction

Dorado uses self‑developed HSSD SSDs, each consisting of a controller unit, host interface, DRAM, and NAND flash cells. NAND flash operates through erase, program, and read cycles, with block‑level wear monitoring and garbage collection.

Block: the smallest erasable unit, composed of multiple pages. Page: the smallest programmable/readable unit, typically 16 KB.
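This asymmetry — program and read at page granularity, erase only at block granularity — drives most SSD firmware design. A minimal illustrative model (the class, page count, and method names are assumptions for this sketch, not a real SSD firmware API):

```python
# Minimal model of NAND flash granularity: pages are the unit of
# program/read, blocks are the unit of erase, and pages cannot be
# overwritten in place. Purely illustrative.

PAGE_SIZE = 16 * 1024        # 16 KB page, per the text
PAGES_PER_BLOCK = 256        # typical value; an assumption here

class NandBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # None = erased
        self.pe_cycles = 0                      # program/erase count

    def program(self, page_idx, data):
        # NAND can only program an erased page; no in-place overwrite.
        if self.pages[page_idx] is not None:
            raise ValueError("page must be erased before programming")
        self.pages[page_idx] = data

    def erase(self):
        # Erase works only on the whole block and wears all its cells.
        self.pages = [None] * PAGES_PER_BLOCK
        self.pe_cycles += 1

blk = NandBlock()
blk.program(0, b"hello")
blk.erase()                  # rewriting page 0 costs a whole-block erase
blk.program(0, b"world")
print(blk.pe_cycles)         # 1
```

This is why SSDs write updates to fresh pages elsewhere and reclaim stale pages later via garbage collection, rather than erasing a block for every overwrite.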

Wear leveling ensures even P/E cycles across blocks, using dynamic and static strategies to prolong NAND flash lifespan.
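The two strategies can be sketched in a few lines. In this illustrative model (function names and thresholds are assumptions), dynamic wear leveling steers new writes to the least‑worn free block, while static wear leveling migrates cold data when the wear gap across all blocks grows too large:

```python
# Sketch of dynamic and static wear-leveling decisions over a map of
# block id -> P/E cycle count. Illustrative only.

def pick_block_dynamic(free_blocks):
    # Dynamic: allocate the free block with the fewest P/E cycles.
    return min(free_blocks, key=free_blocks.get)

def needs_static_leveling(all_blocks, threshold):
    # Static: if cold data pins down low-wear blocks, the wear gap keeps
    # growing; beyond a threshold, migrate that data so those blocks
    # rejoin the write rotation.
    return max(all_blocks.values()) - min(all_blocks.values()) > threshold

free = {"b0": 120, "b1": 45, "b2": 300}
print(pick_block_dynamic(free))                                   # b1
print(needs_static_leveling({"b0": 1000, "b3": 10}, threshold=500))  # True
```

Together the two policies keep P/E cycles roughly even, so no block wears out far ahead of the rest.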

4. Bad Block Management

Bad blocks are identified based on erase cycles, error types, and frequency; XOR redundancy and spare space allow replacement of bad blocks to maintain data integrity throughout the SSD lifecycle.
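A simple way to picture this is a bad‑block table that retires blocks after repeated errors and remaps them to spares. The class, error threshold, and block names below are illustrative assumptions, not Dorado's actual firmware:

```python
# Sketch of bad-block management: blocks that fail repeatedly are retired
# and remapped to spare blocks from reserved over-provisioned space.

class BadBlockManager:
    def __init__(self, spare_blocks):
        self.spares = list(spare_blocks)
        self.remap = {}          # retired block -> spare replacement
        self.error_count = {}    # block -> observed error count

    def record_error(self, block, threshold=3):
        # Retire a block once its error count crosses the threshold.
        self.error_count[block] = self.error_count.get(block, 0) + 1
        if self.error_count[block] >= threshold and block not in self.remap:
            self.remap[block] = self.spares.pop(0)

    def resolve(self, block):
        # Accesses to a retired block are redirected to its spare.
        return self.remap.get(block, block)

mgr = BadBlockManager(spare_blocks=["s0", "s1"])
for _ in range(3):
    mgr.record_error("b7")
print(mgr.resolve("b7"))     # s0  (retired and remapped)
print(mgr.resolve("b1"))     # b1  (healthy, unchanged)
```

In a real SSD the data already on a failing block is rebuilt onto the spare (e.g. from XOR redundancy, as described below) before the remap takes effect.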

5. Data Redundancy Protection

Data is protected using ECC and CRC in DRAM, LDPC and CRC in NAND flash, and XOR redundancy across flash chips to prevent data loss from chip failures.
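The XOR layer is worth a concrete example: because XOR is its own inverse, a parity strip computed across the data strips lets any single lost strip be rebuilt from the survivors. A minimal single‑parity sketch (not Dorado's actual layout):

```python
# XOR redundancy across flash chips: parity = XOR of all data strips,
# so any one failed strip can be recovered from the rest plus parity.

def xor_strips(strips):
    out = bytearray(len(strips[0]))
    for s in strips:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

chips = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # data on three chips
parity = xor_strips(chips)                         # stored on a fourth

# Chip 1 fails; rebuild its strip from the surviving chips plus parity.
rebuilt = xor_strips([chips[0], chips[2], parity])
print(rebuilt == chips[1])   # True
```

The same principle underlies the reconstruction calculations that Dorado's intelligent disk enclosures offload from the controllers.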

6. SAS and NVMe Protocol Comparison

NVMe simplifies the software stack by removing the SCSI layer, reducing protocol interactions, and running directly over PCIe, yielding lower latency and far higher concurrency (up to 64 K queues, each up to 64 K commands deep). SAS requires more protocol steps, leading to higher latency.
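A back‑of‑the‑envelope comparison makes the concurrency gap concrete. The NVMe limits below are the commonly cited maximums; the single SAS queue of depth 256 is a typical HBA default used here purely as an illustrative baseline:

```python
# Rough in-flight command capacity: NVMe's many deep queue pairs vs a
# single SAS command queue. Numbers are protocol maximums / typical
# defaults, not measured figures.

nvme_queues, nvme_depth = 65_535, 65_536   # NVMe maximums
sas_queues, sas_depth = 1, 256             # illustrative SAS HBA default

nvme_outstanding = nvme_queues * nvme_depth
sas_outstanding = sas_queues * sas_depth
print(nvme_outstanding // sas_outstanding)   # 16776960 (~16.8 million x)
```

Real devices expose far fewer queues than the protocol maximum, but the per‑core, lock‑free queue‑pair model is what lets NVMe scale with modern multi‑core hosts where SAS serializes on shared state.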

7. SCM (Storage Class Memory) Overview

SCM sits between DRAM and NAND flash in the memory hierarchy, offering lower latency than NAND and higher capacity than DRAM. Main SCM types include XL‑Flash, Intel 3D XPoint (phase‑change memory), and Z‑NAND, with XL‑Flash achieving read latencies as low as 4 µs and write latencies around 75 µs.

8. x86 vs ARM Choice

x86 uses a complex instruction set (CISC) geared toward peak performance, while ARM uses a reduced instruction set (RISC) that enables smaller, more efficient designs; ARM supports 64‑bit operation and offers better power efficiency for storage controllers.

Reference: OceanStor Dorado flash product introduction.

Tags: storage architecture, SSD, NVMe, RDMA, data redundancy, wear leveling, all-flash storage
Written by Architects' Tech Alliance — sharing project experiences and insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
