Deploy Ceph RBD Storage in Kubernetes: Step‑by‑Step Guide
This guide walks through installing ceph-common, creating a Ceph RBD pool and image, configuring Kubernetes Deployment and PersistentVolume resources, and setting up dynamic provisioning with a StorageClass to enable block storage for containers.
Using RBD as Storage in Kubernetes
To use RBD as backend storage for Kubernetes, first install ceph-common on the nodes.
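For example (pick the command for your distribution's package manager):
<code># yum install -y ceph-common      # RHEL/CentOS/Rocky
# apt-get install -y ceph-common  # Debian/Ubuntu
</code>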
1. Create RBD in Ceph Cluster
Prepare a pool and an image in the Ceph cluster before using it in Kubernetes.
<code># ceph osd pool create pool01
# ceph osd pool application enable pool01 rbd
# rbd pool init pool01
# rbd create pool01/test --size 10G --image-format 2 --image-feature layering
# rbd info pool01/test
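# (optional) sanity check: map the image on a node with ceph-common installed
# before handing it to Kubernetes, then unmap it again
# rbd map pool01/test
# rbd showmapped
# rbd unmap pool01/test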
</code>
2. Write Kubernetes YAML Files
Define a Deployment that mounts the RBD image.
<code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd
  template:
    metadata:
      labels:
        app: rbd
    spec:
      volumes:
      - name: test
        rbd:
          fsType: xfs
          image: test
          pool: pool01
          user: admin
          keyring: /root/admin.keyring
          monitors:
          - 192.168.200.230:6789
          readOnly: false
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: test
          mountPath: /usr/share/nginx/html
</code>
Apply the manifest and verify the pod is running:
<code># kubectl get pods
NAME READY STATUS RESTARTS AGE
rbd-888b8b747-n56wr 1/1 Running 0 26m
</code>
If the pod stays in ContainerCreating, ensure ceph-common is installed and that the keyring and ceph.conf are available on the node.
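Instead of distributing a keyring file to every node, the in-line rbd volume can also authenticate through a Kubernetes Secret referenced via secretRef (the Secret name ceph-admin-secret below is illustrative; the key is the same admin key used later in this guide):
<code>apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
type: kubernetes.io/rbd
stringData:
  key: AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
</code>In the Deployment, replace the keyring field with:
<code>rbd:
  ...
  user: admin
  secretRef:
    name: ceph-admin-secret
</code>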
2.1 Enter the Container to Check Mount
<code># kubectl exec -it rbd-5db4759c-nj2b4 -- bash
root@rbd-5db4759c-nj2b4:/# df -hT | grep /dev/rbd0
/dev/rbd0 xfs 10G 105M 9.9G 2% /usr/share/nginx/html
</code>
The RBD device is formatted as XFS and mounted at /usr/share/nginx/html.
2.2 Modify Content Inside the Container
<code># cd /usr/share/nginx/html
# echo 123 > index.html
# chmod 644 index.html
# exit
</code>
Access the service to confirm the content:
<code># curl 192.168.166.131
123
</code>
Delete the pod and let Kubernetes recreate it to verify persistence:
<code># kubectl delete pod rbd-5db4759c-nj2b4
pod "rbd-5db4759c-nj2b4" deleted
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
rbd-5db4759c-v9cgm 1/1 Running 0 40s 192.168.166.132 node1
# curl 192.168.166.132
123
</code>
3. Use PersistentVolume (PV) with RBD
Developers often prefer to request storage via a PVC rather than writing low‑level YAML for each volume.
3.1 Create a PersistentVolumeClaim (PVC)
<code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 8Gi
</code>
The PVC is pending until a matching PV is created.
3.2 Create a PersistentVolume (PV)
<code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbdpv
spec:
  capacity:
    storage: 8Gi
  volumeMode: Block
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: xfs
    image: test
    pool: rbd
    user: admin
    keyring: /etc/ceph/ceph.client.admin.keyring
    monitors:
    - 172.16.1.33
    readOnly: false
</code>
Once the PV exists, it binds to the pending PVC:
<code># kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myclaim Bound rbdpv 8Gi RWO 11s
</code>
3.3 Use the PVC in a Pod
<code>apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  volumes:
  - name: rbd
    persistentVolumeClaim:
      claimName: myclaim
      readOnly: false
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeDevices:
    - devicePath: /dev/rbd0
      name: rbd
</code>
Verify the block device appears inside the container:
<code># kubectl exec -it pvc-pod -- bash
root@pvc-pod:/# ls /dev/rbd0
/dev/rbd0
</code>
4. Dynamic Provisioning
Use a StorageClass to let Kubernetes automatically create PVs for PVCs.
4.1 Install Ceph CSI Driver
<code># git clone https://gitee.com/yftyxa/ceph-csi.git
# cd ceph-csi/deploy/
# kubectl create ns csi
# kubectl apply -f . -n csi
</code>
Adjust csi-rbdplugin-provisioner.yaml (set --extra-create-metadata=false) and the ConfigMap csi-config-map.yaml to match your Ceph monitors and cluster ID.
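For reference, the ConfigMap the plugin reads looks roughly like this (a sketch using the clusterID and monitor address that appear elsewhere in this guide; substitute your own fsid and monitors):
<code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: csi
data:
  config.json: |-
    [
      {
        "clusterID": "c1f213ae-2de3-11ef-ae15-00163e179ce3",
        "monitors": ["192.168.200.230:6789"]
      }
    ]
</code>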
4.2 Create a Secret with Ceph Credentials
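The plaintext key used in the stringData fields below can be read from the Ceph cluster:
<code># ceph auth get-key client.admin
</code>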
<code>apiVersion: v1
kind: Secret
metadata:
  name: csi-secret
  namespace: csi
stringData:
  userID: admin
  userKey: AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
  adminID: admin
  adminKey: AQC4QnJmng4HIhAA42s27yOflqOBNtEWDgEmkg==
</code>
4.3 Define the StorageClass
<code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: c1f213ae-2de3-11ef-ae15-00163e179ce3
  pool: rbd
  imageFeatures: "layering"
  csi.storage.k8s.io/provisioner-secret-name: csi-secret
  csi.storage.k8s.io/provisioner-secret-namespace: csi
  csi.storage.k8s.io/controller-expand-secret-name: csi-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: csi
  csi.storage.k8s.io/node-stage-secret-name: csi-secret
  csi.storage.k8s.io/node-stage-secret-namespace: csi
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- discard
</code>
4.4 Set the StorageClass as Default (optional)
<code># kubectl edit sc csi-rbd-sc
# add annotation:
# storageclass.kubernetes.io/is-default-class: "true"
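# or apply the annotation in one step:
# kubectl annotate sc csi-rbd-sc storageclass.kubernetes.io/is-default-class="true"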
</code>
5. Test Dynamic Provisioning
5.1 Create a PVC without specifying a StorageClass
<code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-pvc1
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 20Gi
</code>
Because the csi-rbd-sc StorageClass is the default, the PVC is bound automatically:
<code># kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sc-pvc1 Bound pvc-167cf73b-4983-4c28-aa98-bb65bb966649 20Gi RWO csi-rbd-sc 6s
</code>
The corresponding RBD image can be seen in the Ceph cluster:
<code># rbd ls
csi-vol-56e37046-b9d7-4ef1-a534-970a766744f3
# rbd info csi-vol-56e37046-b9d7-4ef1-a534-970a766744f3
size 20 GiB in 3840 objects
features: layering
...</code>
This completes the end-to-end setup of Ceph RBD storage for Kubernetes, covering manual PV/PVC usage and fully automated dynamic provisioning.
Raymond Ops
Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.