Using CephFS with Kubernetes: Static and Dynamic Persistent Volumes
This article explains how to integrate CephFS storage into Kubernetes, covering both static and dynamic PersistentVolume (PV) configurations, with YAML examples, command-line steps, and troubleshooting tips for read-write access to a shared volume from multiple pods.
Static PVs are created by the cluster administrator. An example definition:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 192.168.0.3:6789
    user: kube
    secretRef:
      name: secret-for-cephfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
```
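The PV definition references a Secret named `secret-for-cephfs` holding the Ceph key for the `kube` user; the in-tree CephFS volume plugin expects that key under the `key` field. A minimal sketch (the value shown is a placeholder; the real one would come from base64-encoding the output of `ceph auth get-key client.kube`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-for-cephfs
  namespace: cephfs
data:
  # Placeholder; replace with: ceph auth get-key client.kube | base64
  key: QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==
```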
The PV and a corresponding PVC are then created:

```shell
$ kubectl create -f cephfs-pv.yaml -n cephfs
persistentvolume "cephfs-pv" created
$ kubectl get pv -n cephfs
```
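The matching claim is not shown in the original; a sketch of what it might look like, assuming a hypothetical claim name `cephfs-pvc` bound explicitly to the static PV via `volumeName` (setting `storageClassName: ""` keeps a default StorageClass from intercepting the claim):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc    # hypothetical name, not from the original
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # opt out of dynamic provisioning
  volumeName: cephfs-pv   # bind to the static PV defined above
  resources:
    requests:
      storage: 1Gi
```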
For dynamic provisioning, the community‑provided cephfs-provisioner is described, including its architecture, the required StorageClass, and RBAC setup. Sample YAML for the StorageClass and PVC are included:
```shell
# kubectl create -f class.yaml
# kubectl get sc -n cephfs
NAME      PROVISIONER       AGE
cephfs    ceph.com/cephfs   33d
```
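The contents of `class.yaml` are not reproduced in the original. A sketch of a typical StorageClass for the external cephfs-provisioner, assuming the monitor address from the static example and hypothetical admin-secret names:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.0.3:6789
  adminId: admin                      # Ceph user with rights to create volumes
  adminSecretName: ceph-admin-secret  # hypothetical Secret holding the admin key
  adminSecretNamespace: cephfs
```

The provisioner watches for PVCs that request this class and carves out a per-claim directory in CephFS, so no PV has to be pre-created by hand.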
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```
The guide also discusses common pitfalls such as default directory paths, volume group settings, and permission issues, offering solutions like modifying VOLUME_GROUP and adjusting mount permissions to avoid admin‑only restrictions.
Overall, the tutorial provides step‑by‑step instructions and code snippets to enable read‑write access to a CephFS volume from multiple pods within a deployment.
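To illustrate the multi-pod read-write goal, a sketch of a Deployment that mounts the dynamically provisioned claim `pvc-1` from two replicas (the Deployment name, labels, and image are assumptions for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-demo    # hypothetical name
spec:
  replicas: 2          # both pods share the same ReadWriteMany volume
  selector:
    matchLabels:
      app: cephfs-demo
  template:
    metadata:
      labels:
        app: cephfs-demo
    spec:
      containers:
        - name: app
          image: busybox
          # each pod writes its own file into the shared CephFS mount
          command: ["sh", "-c", "echo hello > /data/$(hostname) && sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc-1
```

Because the PVC requests `ReadWriteMany`, both replicas can mount and write to the volume concurrently, which is the access pattern CephFS is well suited for.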
360 Tech Engineering
The official technology channel of 360, dedicated to building a professional technology aggregation platform for the brand.