How to Deploy Rook‑Ceph on Kubernetes: Step‑by‑Step Guide
This guide walks through installing the open‑source cloud‑native storage orchestrator Rook, creating a Ceph cluster on a Kubernetes environment, configuring the Ceph dashboard, deploying the Rook toolbox, and setting up a StorageClass with RBD block storage, including troubleshooting tips and essential commands.
Introduction
Rook is an open‑source cloud‑native storage orchestrator that provides a platform, framework and support for various storage solutions to integrate natively with cloud‑native environments.
It turns distributed storage systems into self‑managing, self‑scaling, self‑healing services, automating tasks such as deployment, bootstrapping, configuration, scaling, upgrading, migration, disaster recovery, monitoring and resource management.
In short, Rook consists of a set of Kubernetes Operators that fully control the deployment, management and automatic recovery of multiple data storage solutions such as Ceph, EdgeFS, MinIO and Cassandra.
The most stable storage supported by Rook is Ceph; this guide shows how to use Rook to create and maintain a Ceph cluster as persistent storage for Kubernetes.
Environment Preparation
The Kubernetes cluster used here was deployed with KubeSphere in a high-availability configuration; for public-cloud installations, refer to KubeSphere's multi-node installation documentation.
Note: each of the nodes kube-node5, kube-node6 and kube-node7 has two data disks.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master1 Ready master 118d v1.17.9
kube-master2 Ready master 118d v1.17.9
kube-master3 Ready master 118d v1.17.9
kube-node1 Ready worker 118d v1.17.9
kube-node2 Ready worker 118d v1.17.9
kube-node3 Ready worker 111d v1.17.9
kube-node4 Ready worker 111d v1.17.9
kube-node5 Ready worker 11d v1.17.9
kube-node6 Ready worker 11d v1.17.9
kube-node7 Ready worker 11d v1.17.9
Ensure that the lvm2 package is installed on all worker nodes before proceeding.
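Rook's OSD provisioning (ceph-volume) relies on LVM, which is why lvm2 must be present. A minimal sketch, assuming CentOS/RHEL worker nodes, with the Debian/Ubuntu equivalent as a comment:

# Run on every worker node (CentOS/RHEL)
$ sudo yum install -y lvm2
# Debian/Ubuntu: sudo apt-get install -y lvm2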
Deploy Rook and Ceph Cluster
Clone the Rook repository
$ git clone -b release-1.4 https://github.com/rook/rook.git
Change to the example directory
$ cd /root/ceph/rook/cluster/examples/kubernetes/ceph
Create the CRDs and operator
$ kubectl create -f common.yaml -f operator.yaml
# common.yaml defines permissions and CRD resources
# operator.yaml contains the rook-ceph-operator deployment
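Before creating the cluster, it helps to confirm the operator came up; a quick check using the app label from the Rook deployment:

$ kubectl -n rook-ceph get pod -l app=rook-ceph-operator
# Wait until the pod shows STATUS Running before the next step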
Create the Ceph cluster
$ kubectl create -f cluster.yaml
# The default cluster will automatically discover and initialize empty disks on the nodes (minimum 3 nodes, each with at least one free disk).
Check pod status:
$ kubectl get pod -n rook-ceph -o wide
The output shows all component pods; pods whose names start with rook-ceph-osd-prepare automatically detect newly attached disks and trigger OSD creation.
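Disk discovery is governed by the storage section of cluster.yaml; a sketch of the relevant defaults in the release-1.4 example (check your copy of the file):

storage:
  useAllNodes: true     # place OSDs on every eligible node
  useAllDevices: true   # consume every empty, unpartitioned disk that is found
  # deviceFilter: "^sd[b-c]"   # optionally restrict which devices are eligible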
Configure Ceph Dashboard
The Ceph Dashboard is a built‑in web UI for monitoring and managing the cluster. By default it is exposed as a ClusterIP service, which is not reachable from outside.
$ kubectl apply -f dashboard-external-http.yaml
The manifest defines a NodePort service in front of the mgr dashboard:
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 7000
    protocol: TCP
    targetPort: 7000
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  type: NodePort
Note: port 8443 is for HTTPS and requires certificates; this tutorial only configures HTTP on port 7000.
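Whether the dashboard serves HTTP on 7000 or HTTPS on 8443 is controlled in the CephCluster spec; a sketch of the relevant cluster.yaml fragment (field names from the Rook v1.4 CephCluster CRD):

spec:
  dashboard:
    enabled: true
    ssl: false   # false => plain HTTP on port 7000; true => HTTPS on 8443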
Check the service status:
$ kubectl get svc -n rook-ceph
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard-external-https NodePort 10.233.56.73 <none> 7000:31357/TCP 12d
...
Access the dashboard via the NodePort: http://{master1-ip}:31357
Retrieve the dashboard password:
$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
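The default dashboard user is admin; putting the two together (a convenience sketch, with {master1-ip} as the placeholder above):

$ PASSWORD=$(kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
    -o jsonpath="{['data']['password']}" | base64 --decode)
$ echo "Log in at http://{master1-ip}:31357 as admin / $PASSWORD"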
Deploy Rook Toolbox
The toolbox provides a container with common debugging tools.
$ kubectl apply -f toolbox.yaml
Enter the toolbox to inspect the Ceph cluster:
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
$ ceph -s
  cluster:
    id:     1457045a-4926-411f-8be8-c7a958351a38
    health: HEALTH_WARN
            mon a is low on available space
            2 osds down
...
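A few standard Ceph CLI commands that are handy inside the toolbox when chasing warnings like the ones above:

$ ceph osd tree     # map the down OSDs to their hosts and devices
$ ceph osd status   # per-OSD state and utilization
$ ceph df           # cluster-wide and per-pool capacity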
Deploy StorageClass (RBD Block Storage)
Ceph offers object storage (RADOSGW), block storage (RBD) and file system storage (CephFS). RBD is the most stable and commonly used block storage type.
# Apply the StorageClass
$ kubectl apply -f storageclass.yaml
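For orientation, storageclass.yaml in the release-1.4 RBD example creates a replicated pool plus the StorageClass that consumes it; an abridged sketch (the CSI secret parameters from the example file are omitted here):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3          # three copies of every object, one per host
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  # ...plus the csi.storage.k8s.io/*-secret-name parameters from the example
reclaimPolicy: Delete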
Create a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
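Applying the claim and watching it bind (pvc.yaml is a placeholder name for the manifest above):

$ kubectl apply -f pvc.yaml
$ kubectl get pvc rbd-pvc
# STATUS should reach Bound once the CSI provisioner has created the RBD image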
Create a pod that uses the PVC:
apiVersion: v1
kind: Pod
metadata:
  name: csirbd-demo-pod
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - name: mypvc
      mountPath: /var/lib/www/html
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: rbd-pvc
      readOnly: false
Verify the status of the pod, PVC and PV using kubectl get pod,pvc,pv -n rook-ceph (substitute the namespace you created the PVC and pod in; PVs themselves are cluster-scoped).
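To confirm the RBD volume is actually mounted, a quick check inside the demo pod (names from the manifests above):

$ kubectl exec -it csirbd-demo-pod -- df -h /var/lib/www/html
# Expect a /dev/rbd* device of roughly 2Gi mounted at /var/lib/www/html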
Summary
For newcomers to Rook + Ceph, the deployment involves many steps and potential pitfalls; this record aims to help by providing a complete walkthrough.
FAQ
Ceph reports no available OSD disks: ensure the disks are completely clean; zap leftover partition tables and remove stale device-mapper entries (see the cleanup sketch after this list).
Supported Ceph storage types: RBD block storage, CephFS file storage, and S3‑compatible object storage.
How to troubleshoot deployment issues: Consult the official Rook and Ceph documentation.
Dashboard access fails: Open the NodePort in the cloud provider’s security group.
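A minimal disk-cleanup sketch for the first FAQ entry, adapted from the teardown steps in the Rook documentation; /dev/sdb is a placeholder, and this irreversibly destroys the disk's contents:

#!/usr/bin/env bash
DISK="/dev/sdb"   # placeholder: the disk to hand back to Rook
sgdisk --zap-all "$DISK"                                           # wipe the partition table (sgdisk is in the gdisk package)
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync      # clear Ceph metadata at the start of the disk
ls /dev/mapper/ceph-* 2>/dev/null | xargs -r -I% dmsetup remove %  # drop leftover ceph-volume LVM mappings
rm -rf /dev/ceph-*                                                 # remove stale LVM device nodes
rm -rf /var/lib/rook                                               # remove Rook's on-host state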
References
https://rook.github.io/docs/rook/
https://docs.ceph.com/en/pacific/