
Ceph Storage System: Architecture, Key Technologies, and Performance Evaluation Compared with Swift

The article provides a comprehensive overview of Ceph, covering its object storage concepts, architecture, core technologies, performance testing results versus Swift, and concludes with a summary of findings and future outlook for this distributed storage solution.

Baidu Waimai Technology Team

This article introduces the Ceph storage product from the perspectives of object storage, Ceph architecture, key technologies, and performance verification and analysis, and compares it with the Swift product.

1. Technical Background
Storing massive numbers of small files calls for a unified storage solution. Ceph is built on RADOS (Reliable Autonomic Distributed Object Store), which locates data by computation rather than by consulting a central metadata service, eliminating metadata-lookup overhead while providing fault tolerance. This design has made Ceph a leading open-source storage option in the OpenStack ecosystem.

2. Related Technologies
Object storage is characterized by remote access, support for massive numbers of users, unlimited scalability, and low cost. Ceph integrates object, block, and file storage, unlike Swift, which focuses solely on object storage.

2.1 Object Storage Concept
Object storage enables worldwide access over HTTP-based protocols (e.g., Amazon's S3 API) and supports multi-tenant isolation and virtually unlimited capacity.
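To make the flat, bucket/key addressing model concrete, here is a toy in-memory sketch. Everything in it is illustrative: `ToyObjectStore` and its method names are invented for this example, and a real client would issue HTTP requests (e.g. `PUT /bucket/key`) rather than call methods on a local object.

```python
# Toy sketch of S3-style flat addressing: objects live under (bucket, key),
# with no directory hierarchy. All names here are hypothetical.

class ToyObjectStore:
    def __init__(self):
        # Flat namespace: (bucket, key) -> bytes.
        self._objects = {}

    def put(self, bucket, key, data):
        self._objects[(bucket, key)] = bytes(data)

    def get(self, bucket, key):
        return self._objects[(bucket, key)]

    def list_keys(self, bucket):
        # "Folders" are only a naming convention inside keys.
        return sorted(k for (b, k) in self._objects if b == bucket)

store = ToyObjectStore()
store.put("photos", "2015/cat.jpg", b"\xff\xd8...")
store.put("photos", "2015/dog.jpg", b"\xff\xd8...")
print(store.list_keys("photos"))  # ['2015/cat.jpg', '2015/dog.jpg']
```

The key property shown: the store never interprets `2015/` as a directory, which is what lets object stores scale out without a hierarchical metadata tree.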

2.2 Ceph and OpenStack Relationship
In OpenStack, Ceph serves as the default backend for Cinder (block storage), competes with Swift for object storage, provides image caching for Glance, and can serve as the local filesystem for Nova instances.
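As a concrete illustration of the Cinder integration, the fragment below shows typical cinder.conf settings for an RBD backend. The pool and user names are placeholders, and exact option names can vary by OpenStack release.

```ini
# Hypothetical cinder.conf excerpt for a Ceph RBD backend;
# pool/user names are placeholders.
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
```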

2.3 Ceph vs. Swift Comparison
Swift, developed in Python as a pure object store, contrasts with Ceph's C++ implementation, which supports unified storage (object, block, and file) while demonstrating comparable scalability and reliability.

3. Ceph Architecture
Ceph is fundamentally a RADOS (Reliable, Autonomic, Distributed Object Store) system. Its features include high efficiency, unified storage, and scalability without a central database layer: Monitor (Mon) and OSD daemons cooperatively maintain the cluster map.
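A minimal ceph.conf sketch for a cluster along the lines tested later (three replicas, multiple OSD hosts) might look like the fragment below. The fsid, hostnames, and addresses are placeholders, not values from the original benchmark.

```ini
# Hypothetical minimal ceph.conf; all identifiers are placeholders.
[global]
fsid = a7f64266-0894-4f1e-a635-d0aedacea123
mon_initial_members = mon1
mon_host = 10.0.0.1
osd_pool_default_size = 3      ; replicas per object
osd_pool_default_min_size = 2  ; replicas required to keep serving I/O
```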

Ceph’s logical layers consist of Nova (compute) accessing block storage via librbd, Cinder managing volumes, and high‑level interfaces such as RADOS Gateway, RBD, and CephFS.

4. Key Technologies
Detailed, logical, and network architectures are presented, illustrating client-OSD interactions, placement groups (PGs), the CRUSH algorithm, and the monitor daemons that manage cluster state.
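The object → PG → OSD mapping can be sketched in a few lines. This is not the real CRUSH algorithm (which hashes with rjenkins and walks a weighted device hierarchy); it is a toy, hash-based stand-in that shows the essential idea, namely that any client can compute placement deterministically instead of asking a metadata server. The pool size and OSD count below are assumptions for the example.

```python
# Toy placement sketch in the spirit of Ceph's object -> PG -> OSD mapping.
# NOT the real CRUSH algorithm; it only demonstrates computed placement.
import hashlib

PG_NUM = 128           # placement groups in the pool (assumed)
OSDS = list(range(5))  # five OSD ids, echoing the test cluster in section 5
REPLICAS = 3

def _h(*parts):
    # Stable 64-bit hash of the joined parts.
    data = ":".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def object_to_pg(name):
    # Objects hash into a fixed number of placement groups.
    return _h(name) % PG_NUM

def pg_to_osds(pg):
    # "Straw"-style draw: each OSD gets a pseudo-random score per PG,
    # and the top REPLICAS scores win. Every node computes the same answer.
    ranked = sorted(OSDS, key=lambda osd: _h(pg, osd), reverse=True)
    return ranked[:REPLICAS]

pg = object_to_pg("myimage.0000000000000001")
print(pg, pg_to_osds(pg))  # identical on every client, no directory lookup
```

Because placement is a pure function of the object name and the cluster map, adding or removing OSDs only requires redistributing the affected PGs rather than rewriting a central index.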

5. Performance Verification and Analysis

5.1 RBD (XFS) vs. Disk (ext4) Performance
Using FIO and sysbench tests on 10 GB files with three replicas across five OSD nodes, RBD (XFS) outperformed Disk (ext4) in random read/write and mixed workloads.
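A fio job file approximating the mixed random workload described above might look like this. The original benchmark's exact parameters were not published here, so the block size, queue depth, and read/write mix below are assumptions.

```ini
; Hypothetical fio job file for a 10 GB mixed random test (parameters assumed).
[global]
ioengine=libaio
direct=1
size=10g
runtime=60
time_based

[randrw-4k]
rw=randrw
rwmixread=70
bs=4k
iodepth=32
filename=/mnt/rbd/testfile
```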

5.2 Radosgw Interface Performance
Rest-bench tests showed that larger object sizes increase throughput, while smaller objects raise IOPS but reduce aggregate bandwidth; higher concurrency improves both throughput and IOPS.
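The IOPS-versus-bandwidth trade-off follows from simple arithmetic: aggregate bandwidth is IOPS times object size. The numbers below are invented purely to illustrate the shape of the relationship, not measured values from the benchmark.

```python
# Back-of-the-envelope model of the trade-off: bandwidth = IOPS x object size.
# The IOPS figures are illustrative assumptions, not measurements.

def bandwidth_mb_s(iops, object_size_kb):
    return iops * object_size_kb / 1024.0

# Small objects: many operations, little aggregate bandwidth.
small = bandwidth_mb_s(iops=2000, object_size_kb=4)    # 7.8 MB/s
# Large objects: far fewer operations, much higher bandwidth.
large = bandwidth_mb_s(iops=150, object_size_kb=4096)  # 600.0 MB/s

print(f"4 KB objects: {small:.1f} MB/s")
print(f"4 MB objects: {large:.1f} MB/s")
```

Per-request overhead (HTTP parsing, replication round-trips) is roughly fixed, which is why small objects saturate IOPS long before they saturate bandwidth.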

5.3 MySQL on Ceph OLTP Test
Sysbench tests with five OSD nodes showed RBD (XFS) superior in SELECT queries but inferior in UPDATE and mixed workloads compared with Disk (ext4).

6. Summary and Outlook
The tests confirm that RBD (XFS) excels at random I/O, that larger objects improve throughput, and that higher concurrency benefits performance. Ceph's rapid development and adoption suggest continued growth and broader use cases in software-defined storage.

Tags: Distributed Systems, Performance Testing, Ceph, Object Storage, OpenStack, RADOS
Written by

Baidu Waimai Technology Team

The Baidu Waimai Technology Team supports and drives the company's business growth. This account provides a platform for engineers to communicate, share, and learn. Follow us for team updates, top technical articles, and internal/external open courses.
