
Essential Ceph Command Cheat Sheet for Cluster Management

This guide collects essential Ceph commands for day-to-day cluster administration: starting services, checking health and status, managing monitors, metadata servers, and OSDs, creating an admin user, purging nodes, and working with the CRUSH map.


Start Ceph Services

service ceph start mon.node1
service ceph start mds.node1
service ceph start osd.0
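The `service` invocations above use the old sysvinit style. On systemd-based releases (Infernalis/Jewel and later) each daemon has its own unit instead; the sketch below echoes the equivalent commands as a dry run, reusing the same daemon IDs (node1, 0) as the examples above.

```shell
# Hedged sketch: systemd equivalents of the sysvinit commands above.
# Echoed as a dry run; on a live node, run the systemctl commands directly.
start_units() {
  echo "systemctl start ceph-mon@node1"
  echo "systemctl start ceph-mds@node1"
  echo "systemctl start ceph-osd@0"
}
start_units
```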

Check Cluster Health

ceph health

Output:

HEALTH_OK
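The one-word health status makes `ceph health` easy to script against. A minimal sketch, with the status hard-coded for illustration (on a live cluster, use `status=$(ceph health)` instead):

```shell
# Hedged sketch: gate a maintenance script on the cluster health status.
classify_health() {
  case "$1" in
    HEALTH_OK)   echo "healthy" ;;
    HEALTH_WARN) echo "degraded" ;;
    *)           echo "error" ;;
  esac
}
# Hard-coded sample; replace with: classify_health "$(ceph health)"
classify_health HEALTH_OK
```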

Show Cluster Status

ceph -s

Typical output includes the cluster ID (fsid), health status, monitor map, OSD map, placement-group map, and usage statistics. Append --format json-pretty for machine-readable output.

Purge a Node

ceph-deploy purge node1
ceph-deploy purgedata node1

purge uninstalls the Ceph packages from node1; purgedata removes its data under /var/lib/ceph and /etc/ceph.

Create an Admin User and Keyring

ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' > /etc/ceph/ceph.client.admin.keyring

Or using the short option:

ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' -o /etc/ceph/ceph.client.admin.keyring
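The resulting keyring is a plain-text file. A typical entry looks roughly like this (the secret itself is generated by the monitors and shown here as a placeholder):

```ini
[client.admin]
	key = <base64 secret generated by the monitors>
	caps mds = "allow"
	caps mon = "allow *"
	caps osd = "allow *"
```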

Monitor (mon) Management

View Monitor Map

ceph mon dump

Displays epoch, FSID, and the list of monitor nodes with their IPs and ports.

Remove a Monitor

ceph mon remove node1

Removes the specified monitor from the cluster. Make sure enough monitors remain to retain quorum.

Export Current Monitor Map

ceph mon getmap -o 1.txt

Saves the active monitor map in binary form to 1.txt; it can be inspected with monmaptool --print 1.txt.

Metadata Server (mds) Management

Show MDS Status

ceph mds stat

Shows the number of active, standby, and failed MDS daemons.

Dump MDS Map

ceph mds dump

Provides detailed MDS map information, including flags, timestamps, session settings, and per‑node details.

Object Storage Daemon (osd) Management

Check OSD Status

ceph osd stat

Reports the number of OSDs that are up and in.
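The stat line is easy to pick apart in a script. A hedged sketch, using an illustrative sample line (on a live cluster, pass in `$(ceph osd stat)` instead):

```shell
# Hedged: extract the up/in counts from a 'ceph osd stat' style line.
osd_counts() {
  echo "$1" | sed -E 's/.*: ([0-9]+) up, ([0-9]+) in.*/up=\1 in=\2/'
}
# Sample output line for illustration, not from a live cluster:
osd_counts "osdmap e514: 4 osds: 4 up, 4 in"
```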

Mark an OSD Down

ceph osd down 0

Marks OSD 0 as down. If the osd.0 daemon is still running, it will shortly report itself up again; stop the daemon first to keep it down.

Remove an OSD

ceph osd rm 0

Deletes OSD 0 from the OSD map. The OSD must be down before it can be removed.
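In practice, removing an OSD cleanly takes more than the single command above: the OSD is first marked out so data migrates off it, then removed from the CRUSH map, its cephx key deleted, and finally removed from the OSD map. The sketch below echoes that sequence as a dry run; on a live cluster, run the commands directly (and stop the daemon between the out and crush remove steps).

```shell
# Hedged sketch: full removal sequence for one OSD, echoed as a dry run.
remove_osd() {
  local id="$1"
  echo "ceph osd out $id"              # stop placing data on the OSD
  echo "ceph osd crush remove osd.$id" # drop it from the CRUSH map
  echo "ceph auth del osd.$id"         # delete its cephx key
  echo "ceph osd rm $id"               # remove it from the OSD map
}
remove_osd 0
```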

Remove an OSD Host from the CRUSH Map

ceph osd crush rm node1

Removes the host node1 from the CRUSH map.

Query Maximum OSD Count

ceph osd getmaxosd

Typical output: max_osd = 4 in epoch 514. The value reflects the highest OSD id the current map can accommodate; it can be raised with ceph osd setmaxosd.

Unpause OSDs

ceph osd unpause

Resumes OSD operations after a pause.
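unpause is the counterpart of ceph osd pause, which sets the pauserd/pausewr flags and halts client reads and writes. The pair is sketched below as a dry run (on a live cluster, run the commands directly):

```shell
# Hedged sketch: pause/unpause pair, echoed as a dry run.
pause_cycle() {
  echo "ceph osd pause"    # sets pauserd,pausewr flags; client I/O stops
  echo "ceph osd unpause"  # clears the flags; client I/O resumes
}
pause_cycle
```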

Written by Linux Cloud Computing Practice