
Step‑by‑Step Guide to Deploy a Ceph Cluster with ceph‑deploy on CentOS 7

This tutorial walks through the complete process: preparing three CentOS 7 nodes, installing ceph‑deploy, laying out the storage and network topology, deploying monitors, managers, OSDs, MDS, and RGW in sequence, enabling the Ceph dashboard, running verification tests, and finally uninstalling the cluster.


Deployment Topology

Three Ceph nodes are arranged with identical storage devices and distinct IP addresses for management/public and cluster networks.

Storage Device Topology

Each node uses sda as the system disk, and sdb, sdc, sdd as OSD disks (OSD1‑OSD3).

Network Topology

Three logical networks are defined:

Deployment/management (MGMT) network, merged with the Public network.

Public Network for client access (e.g., 172.18.22.234/24, .235/24, .236/24).

Cluster Network for OSD replication (e.g., 192.168.57.101/24, .102/24, .103/24).
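
Assuming the hostnames used in the deployment commands later (ceph-node1 through ceph-node3) map to these addresses in order, the per-node layout is:

Node         Public IP           Cluster IP
ceph-node1   172.18.22.234/24    192.168.57.101/24
ceph-node2   172.18.22.235/24    192.168.57.102/24
ceph-node3   172.18.22.236/24    192.168.57.103/24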

Base System Environment

CentOS 7 kernel version checked with uname -a.

Ceph Mimic YUM repository configured (mirrors.tuna.tsinghua.edu.cn).

System updated, vim and wget installed.

Firewalld stopped and disabled; SELinux set to disabled.

/etc/hosts populated with node and OpenStack IPs.

NTP (chrony) synchronized with the OpenStack controller.

EPEL repository installed.

Password‑less SSH keys generated and copied among the three nodes.
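
A condensed sketch of those preparation steps as shell commands; run on every node unless noted, and treat the chrony server placeholder as an assumption to be replaced with the OpenStack controller address:

yum install -y epel-release vim wget chrony && yum update -y
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# point chrony at the OpenStack controller: set 'server <controller-ip> iburst' in /etc/chrony.conf, then:
systemctl enable chronyd && systemctl restart chronyd && chronyc sources
# deploy node only: password-less SSH to all three nodes
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id $h; done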

Install ceph‑deploy Semi‑Automatic Tool

Install the tool: yum install -y ceph-deploy.

Create a working directory: mkdir -pv /opt/ceph/deploy && cd /opt/ceph/deploy.

Initialize the cluster configuration: ceph-deploy new ceph-node1 ceph-node2 ceph-node3. This creates ceph.conf, the monitor keyring ceph.mon.keyring, and a deployment log in the working directory.

Edit ceph.conf to add the public and cluster network settings.
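
A minimal sketch of the additions, assuming the public and cluster subnets shown in the topology above (ceph-deploy new already writes the [global] section, so these lines are appended to it):

public network = 172.18.22.0/24
cluster network = 192.168.57.0/24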

Install Ceph packages on all nodes: ceph-deploy install ceph-node1 ceph-node2 ceph-node3.

Deploy MON

Initialize monitors with ceph-deploy mon create-initial and verify each node runs ceph-mon@<node>.service.
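
As a sketch, from the working directory created above (the status-check hostname is illustrative):

cd /opt/ceph/deploy
ceph-deploy mon create-initial    # gathers the bootstrap keyrings once the monitors reach quorum
ssh ceph-node1 systemctl status ceph-mon@ceph-node1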

Deploy Manager

Create manager daemons on all nodes: ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3. Verify with systemctl status ceph-mgr@<node> and check cluster health.
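
A sketch of the same steps; the ceph-deploy admin line is an assumed extra step that copies ceph.conf and the client.admin keyring to each node so that ceph -s can be run there:

ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
ceph-deploy admin ceph-node1 ceph-node2 ceph-node3    # assumed step: distribute ceph.conf and admin keyring
ssh ceph-node1 systemctl status ceph-mgr@ceph-node1
ssh ceph-node1 ceph -s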

Deploy OSD

List available disks, then wipe the target OSD devices:

ceph-deploy disk zap ceph-node1 /dev/sdb
ceph-deploy disk zap ceph-node1 /dev/sdc
ceph-deploy disk zap ceph-node1 /dev/sdd

Create OSDs using the data devices:

ceph-deploy osd create --data /dev/sdb ceph-node1
ceph-deploy osd create --data /dev/sdc ceph-node1
ceph-deploy osd create --data /dev/sdd ceph-node1
# repeat for ceph-node2 and ceph-node3

After creation, systemctl status ceph-osd@<id> shows three OSD daemons per node, and ceph -s reports 9 OSDs up and in.
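
A quick confirmation, assuming OSD IDs 0 through 2 landed on ceph-node1 (IDs are assigned in creation order):

ssh ceph-node1 systemctl status ceph-osd@0
ceph -s          # expect: osd: 9 osds: 9 up, 9 in
ceph osd tree    # optional: shows the three OSDs under each host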

Deploy MDS

Metadata servers are created with ceph-deploy mds create ceph-node1 ceph-node2 ceph-node3 and verified via systemctl status ceph-mds@<node>.

Deploy RGW

RADOS gateways are deployed with ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3. Each node runs ceph-radosgw@rgw.<node>.
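
A sketch with a basic reachability check; 7480 is the default civetweb port in this release, so adjust if rgw_frontends was customized:

ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3
ssh ceph-node1 systemctl status ceph-radosgw@rgw.ceph-node1
curl http://ceph-node1:7480/    # a ListAllMyBucketsResult XML response means the gateway is up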

Verification Tests

List existing pools with rados lspools.

Create a new pool: ceph osd pool create test_pool 32 32.

Set replica size: ceph osd pool set test_pool size 2.

Put an object: rados -p test_pool put object1 /etc/hosts and list it.

Show placement map: ceph osd map test_pool object1.
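
The same tests as one runnable sequence:

rados lspools
ceph osd pool create test_pool 32 32
ceph osd pool set test_pool size 2
rados -p test_pool put object1 /etc/hosts
rados -p test_pool ls
ceph osd map test_pool object1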

Enable Dashboard

Enable the module: ceph mgr module enable dashboard.

Create a self‑signed certificate: ceph dashboard create-self-signed-cert.

Set admin credentials: ceph dashboard set-login-credentials admin admin.

Retrieve the URL (default https://<node>:8443/).
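
Consolidated; ceph mgr services prints the URL served by the active manager:

ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin admin
ceph mgr services    # e.g. {"dashboard": "https://<active-mgr-node>:8443/"}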

Uninstall Ceph

Purge packages from all nodes: ceph-deploy purge ceph-node1 ceph-node2 ceph-node3.

Remove data: ceph-deploy purgedata ceph-node1 ceph-node2 ceph-node3.

Forget keys and delete configuration files.
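
In ceph-deploy terms, the full teardown from the working directory looks like this (forgetkeys and the local cleanup correspond to the last step above):

ceph-deploy purge ceph-node1 ceph-node2 ceph-node3
ceph-deploy purgedata ceph-node1 ceph-node2 ceph-node3
ceph-deploy forgetkeys
rm -f /opt/ceph/deploy/ceph.*    # remove the local ceph.conf, keyrings, and logs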

Tags: Linux, Dashboard, Cluster, Storage, Ceph, OpenStack, Ceph-deploy