Comparison of Ceph and Huawei OceanStor 9000 Distributed Storage Architectures
This article compares the open‑source Ceph storage system and Huawei's OceanStor 9000 commercial SDS, detailing their service components, software stacks, OpenStack compatibility, and differing strengths in scalability, client architecture, and management for cloud and enterprise storage scenarios.
Ceph is a highly regarded open‑source distributed software‑defined storage (SDS) system that provides object, block, and file storage on a single platform. It is written mainly in C++ and licensed under the LGPL; it was originally created by Sage Weil, whose company Inktank was acquired by Red Hat in 2014.
Huawei OceanStor 9000 is a leading commercial distributed file system in the Chinese market, built on Huawei's CSS‑F architecture with a fully symmetric, scale‑out design, and widely deployed for media, HPC, big data, and video surveillance workloads.
Ceph basic service architecture
Ceph consists of three daemon types: Object Storage Daemons (OSDs), Monitors, and Metadata Servers (MDS). OSDs store data and handle replication, recovery, and rebalancing; Monitors maintain the cluster maps (including the CRUSH map) and track cluster health; the MDS serves POSIX metadata for CephFS.
A minimal Ceph cluster needs one Monitor and two OSDs, but production deployments typically run at least three Monitors for quorum and store data with three replicas (the default since the Firefly release).
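To make the write path concrete, here is a toy sketch of primary‑copy replication in the spirit of Ceph's OSD behavior: the primary applies the write, fans it out to the replicas, and acknowledges only once every copy is in place. All names (`ToyOSD`, `write_object`) are hypothetical illustrations, not Ceph APIs.

```python
class ToyOSD:
    """A stand-in for an OSD: just an id and an in-memory object store."""
    def __init__(self, osd_id):
        self.osd_id = osd_id
        self.store = {}  # object name -> bytes

    def put(self, name, data):
        self.store[name] = data

def write_object(name, data, acting_set):
    """Write to the primary (first OSD in the acting set), then fan out
    to the replicas; acknowledge only when every replica has the object."""
    primary, *replicas = acting_set
    primary.put(name, data)
    for r in replicas:
        r.put(name, data)
    return all(name in osd.store for osd in acting_set)  # ack

osds = [ToyOSD(i) for i in range(3)]   # three replicas, as in Ceph's default
acked = write_object("img-001", b"\x00" * 16, osds)
```

In real Ceph the acting set is computed by CRUSH per placement group, and replication, recovery, and rebalancing all run peer‑to‑peer between OSDs without a central coordinator.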
OceanStor 9000 basic service architecture
OceanStor 9000 runs several management sub‑clusters (ISM, CMS, Monitoring) on ordinary storage nodes in active‑standby mode. The Client Agent (CA) parses file‑system protocols, slices files for distribution across nodes, and recomposes the data on read, supporting CIFS, NFS, and proprietary clients.
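The slicing and recomposition done by a CA‑style component can be sketched as simple striping; the function names (`slice_file`, `assemble`) and the tiny stripe size are illustrative only, not OceanStor internals.

```python
STRIPE_SIZE = 4  # bytes, for illustration; real systems stripe in much larger units

def slice_file(data, stripe_size=STRIPE_SIZE):
    """Split a byte stream into numbered stripes for distribution to nodes."""
    return [(i, data[off:off + stripe_size])
            for i, off in enumerate(range(0, len(data), stripe_size))]

def assemble(stripes):
    """Reorder stripes by index and concatenate them back into the file."""
    return b"".join(chunk for _, chunk in sorted(stripes))

stripes = slice_file(b"hello distributed world")
```

Because each stripe carries its index, the client can reassemble the file correctly even when stripes are fetched from different nodes in arbitrary order.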
Its Metadata Service (MDS) stores metadata and user data on storage nodes with high‑reliability multi‑replica storage.
Object services are provided through OSC and OMD components, while the OBS layer offers key‑value based object storage that underpins NAS and object services.
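The idea of layering both NAS and object services over one key‑value substrate can be sketched minimally as follows; `KVStore` and `ObjectService` are hypothetical stand‑ins for the OBS layer and the services above it, not actual OceanStor components.

```python
class KVStore:
    """Stand-in for the key-value OBS layer: a flat key -> bytes store."""
    def __init__(self):
        self._kv = {}

    def put(self, key, value):
        self._kv[key] = value

    def get(self, key):
        return self._kv.get(key)

class ObjectService:
    """A thin facade mapping bucket/object names onto flat KV keys."""
    def __init__(self, kv):
        self.kv = kv

    def put_object(self, bucket, name, data):
        self.kv.put(f"{bucket}/{name}", data)

    def get_object(self, bucket, name):
        return self.kv.get(f"{bucket}/{name}")

kv = KVStore()
svc = ObjectService(kv)
svc.put_object("media", "clip.mp4", b"frame-data")
```

The design point is that file metadata, file data, and object payloads can all reduce to key‑value records, so one reliable KV layer underpins every higher‑level protocol.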
Ceph software stack
(1) RADOS – the reliable, autonomic, distributed object store that underlies all Ceph services.
(2) librados – client libraries (C, C++, Python, Java, and others) that expose the native RADOS object API for direct application development.
(3) High‑level interfaces – RADOS Gateway (S3/Swift compatible), RBD (block device), and CephFS (POSIX file system).
(4) Client layer – Ceph clients (kernel/VFS or FUSE for CephFS, librbd/librados elsewhere) compute data placement directly with the CRUSH algorithm, so no central lookup table is needed.
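The key property of CRUSH is that any client can compute an object's OSD set deterministically from the cluster map, without asking a lookup service. The sketch below captures that property using rendezvous (highest‑random‑weight) hashing; it is a simplification for illustration, not the real CRUSH algorithm, which also models failure domains and weights.

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Rank OSDs by a hash of (object, osd) and take the top `replicas`.
    Every client computes the same answer from the same inputs."""
    def score(osd):
        digest = hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = list(range(6))                      # six hypothetical OSD ids
primary_set = place("rbd_data.1234", osds)  # same result on every client
```

Like CRUSH, this placement is stable under small cluster changes: removing one OSD only remaps the objects that ranked it highly, rather than reshuffling everything.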
OceanStor 9000 software stack
(1) OBS – the foundational key‑value storage service.
(2) Data processing layer – combines NAS (MDS + CA) and object services (OSC + OMD) to fulfill client requests.
(3) Storage service layer – provides NAS and object services with value‑added features such as snapshots, replication, tiering, erasure coding, and multi‑tenant support.
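Since both systems rely on erasure coding for space‑efficient reliability, here is the simplest possible instance, a single XOR parity stripe (k+1): any one lost data stripe can be rebuilt from the survivors plus parity. Production systems use more general k+m Reed‑Solomon codes; this sketch only shows the principle.

```python
def xor_parity(stripes):
    """Compute the byte-wise XOR of equal-length stripes."""
    parity = bytearray(len(stripes[0]))
    for s in stripes:
        for i, b in enumerate(s):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving, parity):
    """Recover the one missing stripe: XOR the survivors with the parity."""
    return xor_parity(list(surviving) + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]          # three data stripes
p = xor_parity(data)                        # one parity stripe
recovered = rebuild([data[0], data[2]], p)  # stripe 1 lost, then rebuilt
```

With k data stripes and m parity stripes, the overhead is m/k instead of the (n-1)x overhead of full replication, which is why erasure coding dominates for large, cooler datasets.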
OpenStack compatibility
Ceph is widely used as an OpenStack storage backend: the RADOS Gateway serves Swift- and S3-compatible object APIs, while RBD backs Cinder block volumes and Glance images, with native integration into KVM/QEMU and libvirt.
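As a concrete illustration of the Cinder side of that integration, a backend section of `cinder.conf` typically wires up an RBD pool like this; the pool, user, and secret UUID values below are placeholders to adapt to the deployment.

```ini
# Hypothetical cinder.conf excerpt: a Ceph RBD backend for block volumes.
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```

The `rbd_secret_uuid` refers to a libvirt secret holding the Cephx key, which lets QEMU attach RBD volumes to guests without exposing credentials in the domain XML.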
OceanStor 9000 plans to support the OpenStack Manila shared‑file‑system interface and already offers S3‑ and Swift‑compatible APIs, though as a file/object system it currently provides no block (SAN) service for deeper OpenStack integration.
Learning summary
Ceph targets massive, PB‑scale distributed storage with a focus on block and object services, while OceanStor 9000 emphasizes NAS and object workloads for media, genomics, and HPC. Both use erasure coding and replication for reliability, but differ in client architecture, management interfaces, and ecosystem integration.
Architects' Tech Alliance