
Step-by-Step Guide to Deploying Ceph Block Storage on CentOS 7

This article walks through the hardware and software preparation, configuration, deployment, testing, and troubleshooting steps required to set up a Ceph distributed block storage cluster on CentOS 7, including node roles, RAID setups, and Ceph‑deploy commands.


Hardware Environment Preparation

Six machines are prepared: three physical servers as monitor nodes (ceph-mon1, ceph-mon2, ceph-mon3), two physical servers as OSD nodes (ceph-osd1, ceph-osd2), and one virtual machine as the admin node (ceph-adm). Monitors should be deployed in an odd number, with three as the practical minimum; the admin node is optional and can be co-located with a monitor.

ADM server: low‑spec VM for management only.

MON servers: two disks in RAID1 for the OS.

OSD servers: ten 4 TB disks, each serving as an OSD with a dedicated journal. Two large SSDs are partitioned into five sections each to host the journals, while two small SSDs hold the OS in RAID1.
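
Name resolution between the nodes makes the later ceph-deploy steps simpler. A minimal /etc/hosts sketch is shown below; the IP addresses are placeholders, not taken from the source, and must be replaced with the cluster's real management-network addresses.

    # Assumed /etc/hosts entries, identical on every node
    192.168.1.10   ceph-adm
    192.168.1.11   ceph-mon1
    192.168.1.12   ceph-mon2
    192.168.1.13   ceph-mon3
    192.168.1.21   ceph-osd1
    192.168.1.22   ceph-osd2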

Software Environment Preparation

All Ceph nodes run CentOS 7.1 (minimal ISO) with XFS file systems on data disks and RAID1 for the OS. Basic configurations on each node include disabling SELinux, opening firewall ports, and synchronizing time.
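
On CentOS 7 those basic steps typically look like the following; the port range covers the default monitor and OSD ports, and chrony could be substituted for ntpd.

    # Disable SELinux now and across reboots
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

    # Open the default Ceph ports: 6789/tcp for monitors, 6800-7300/tcp for OSDs
    firewall-cmd --zone=public --add-port=6789/tcp --permanent
    firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
    firewall-cmd --reload

    # Keep clocks synchronized
    yum install -y ntp ntpdate
    systemctl enable ntpd
    systemctl start ntpd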

On each OSD server, the ten SAS disks are partitioned and formatted with XFS; the two SSDs for journals are split into five partitions each, left unformatted for Ceph to manage.

A helper script, parted.sh, automates partitioning of the ten data disks and the two journal SSDs (e.g., the data disks /dev/sda through /dev/sdl, excluding /dev/sdc and /dev/sdf, which are the journal SSDs).
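
The script itself is not reproduced in the source; a minimal sketch of what parted.sh might contain is shown below, assuming the data disks are everything from /dev/sda to /dev/sdl except the two journal SSDs.

    #!/bin/bash
    # parted.sh -- hypothetical partitioning helper (device names are assumptions)
    set -e

    DATA_DISKS="sda sdb sdd sde sdg sdh sdi sdj sdk sdl"   # ten 4 TB SAS disks
    JOURNAL_SSDS="sdc sdf"                                  # two journal SSDs

    # One GPT partition per data disk, formatted with XFS
    for d in $DATA_DISKS; do
        parted -s /dev/$d mklabel gpt
        parted -s /dev/$d mkpart primary xfs 0% 100%
        mkfs.xfs -f /dev/${d}1
    done

    # Five equal, unformatted partitions per SSD for the OSD journals
    for s in $JOURNAL_SSDS; do
        parted -s /dev/$s mklabel gpt
        for i in 0 1 2 3 4; do
            parted -s /dev/$s mkpart primary $((i * 20))% $(((i + 1) * 20))%
        done
    done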

Generate an SSH key on the admin node with an empty passphrase, copy it to all Ceph nodes, and verify password‑less SSH connectivity.
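
For example, from ceph-adm (assuming deployment as root):

    # Generate a key with an empty passphrase
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

    # Copy it to every node and confirm password-less login works
    for host in ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2; do
        ssh-copy-id root@$host
        ssh root@$host hostname
    done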

Ceph Deployment

Create a working directory on the admin node and use ceph‑deploy to initialize the cluster, specifying monitor nodes. The command generates ceph.conf, ceph.log, and keyring files.
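
A typical sequence on the admin node:

    # Keep all generated files in one working directory
    mkdir ~/ceph-cluster
    cd ~/ceph-cluster

    # Initialize the cluster with the three monitors; this writes ceph.conf,
    # a deployment log, and the monitor keyring into the current directory
    ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3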

Install Ceph on every node via ceph‑deploy install, then initialize the monitor nodes with ceph‑deploy mon create-initial.
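
For example:

    # Install the Ceph packages on every node
    ceph-deploy install ceph-adm ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2

    # Create the initial monitors and gather their keys
    ceph-deploy mon create-initial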

Inspect the OSD disks, prepare them, and create OSDs so that each disk is paired with its journal partition.
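
One possible sequence with the ceph-deploy releases that shipped for CentOS 7 (colon-separated disk:journal syntax) is sketched below; the device names follow the assumed layout above.

    # Inspect the disks ceph-deploy can see on an OSD node
    ceph-deploy disk list ceph-osd1

    # Wipe a data disk, then create an OSD on it with an SSD journal partition
    ceph-deploy disk zap ceph-osd1:sda
    ceph-deploy osd create ceph-osd1:sda:/dev/sdc1
    ceph-deploy osd create ceph-osd1:sdb:/dev/sdc2
    # ...repeat for the remaining data disks on ceph-osd1 and ceph-osd2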

Synchronize the generated configuration files from the admin node to all other nodes to ensure consistent settings.
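
The usual way to do this is ceph-deploy admin, which copies ceph.conf and the admin keyring to each node:

    ceph-deploy admin ceph-adm ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2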

Testing

Verify the cluster health, then calculate an appropriate placement-group count with the formula Total PGs = (#OSDs * 100) / pool size, where pool size is the replica count, rounding up to the nearest power of two. For 20 OSDs with a pool size of 2 this gives 1000, which rounds up to 1024, so set pg_num and pgp_num to 1024 and aim for a HEALTH_OK status.
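
Assuming the default rbd pool, the check and the PG adjustment might look like this:

    # Overall cluster state
    ceph -s
    ceph health

    # (20 OSDs * 100) / 2 replicas = 1000, rounded up to the next power of two
    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024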

Record the final PG settings in ceph.conf and distribute the updated file to all nodes.
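
For example, the defaults can be captured in the [global] section and pushed out with ceph-deploy; the option names below are the standard Ceph settings, not quoted from the source.

    # Added to ceph.conf on the admin node
    [global]
    osd_pool_default_pg_num  = 1024
    osd_pool_default_pgp_num = 1024
    osd_pool_default_size    = 2

    # Redistribute the updated file
    ceph-deploy --overwrite-conf config push ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2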

Troubleshooting

If network issues arise, confirm that password-less SSH works between nodes and that the firewall allows the required ports. A first installation will likely hit assorted problems, but they become easier to resolve with experience as Ceph is gradually moved into production.
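
A few quick checks cover the most common culprits:

    # From the admin node: password-less SSH should work without prompting
    ssh ceph-osd1 hostname

    # On each node: the Ceph ports should be open
    firewall-cmd --list-ports

    # Overall cluster status and any outstanding warnings
    ceph -s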

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

operations · distributed storage · Ceph · block storage · CentOS 7
Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
