
Deploying CephFS Static and Dynamic PVs in Kubernetes

This guide explains how to configure both static and dynamic PersistentVolumes for CephFS in Kubernetes, covering PV and PVC definitions, deployment integration, the cephfs‑provisioner StorageClass, and common pitfalls such as path and permission issues.

360 Zhihui Cloud Developer

Introduction

The previous article introduced Ceph RBD in Kubernetes; this one focuses on using CephFS as a volume source.

Static PV

Kubernetes supports static and dynamic PV provisioning. With a static PV, the cluster admin creates the PV object beforehand.

<code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 192.168.0.3:6789
    user: kube
    secretRef:
      name: secret-for-cephfs
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
</code>
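
The PV above references secret-for-cephfs, which must exist before the volume can be mounted. A minimal sketch of that Secret, assuming the key is the base64-encoded output of <code>ceph auth get-key client.kube</code> (the value below is a placeholder, not a real key):

<code>apiVersion: v1
kind: Secret
metadata:
  name: secret-for-cephfs
  namespace: cephfs
data:
  # base64-encoded CephX key for client.kube (placeholder shown)
  key: QVFCLXBsYWNlaG9sZGVyLWtleQ==
</code>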

Create the PV. Note that PersistentVolumes are cluster-scoped, so no namespace flag is needed:

<code>$ kubectl create -f cephfs-pv.yaml
persistentvolume "cephfs-pv" created
$ kubectl get pv
</code>

PVC for the static PV

<code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pv-claim
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # bind to the pre-created PV rather than a default StorageClass
  resources:
    requests:
      storage: 1Gi
</code>

Using the PV in a Deployment

<code>metadata:
  name: cephfs-pvc
  namespace: cephfs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: cephfs-pvc
    spec:
      containers:
        - name: nginx
          image: busybox:latest
          volumeMounts:
            - name: cephfs-pv
              mountPath: /data/cephfs
              readOnly: false
      volumes:
        - name: cephfs
          persistentVolumeClaim:
            claimName: cephfs-pv-claim
</code>

Both pods in the deployment can read and write the same CephFS volume.

Dynamic PV

For larger clusters, dynamic provisioning avoids manual PV creation. The community provides external-storage/cephfs as a solution.

Deploying the cephfs‑provisioner

The cephfs‑provisioner implements a StorageClass that creates PVs on demand.

It consists of two parts:

cephfs‑provisioner.go – watches PVC events and calls a Python script to create PVs.

cephfs_provisioner.py – Python wrapper that interacts with CephFS (create, delete, list volumes).

Create the StorageClass:
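
class.yaml looks roughly like this; the adminSecretName and adminSecretNamespace values are assumptions taken from the upstream external-storage examples and should match your own admin Secret:

<code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.0.3:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
</code>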

<code>$ kubectl create -f class.yaml
$ kubectl get sc
NAME    PROVISIONER      AGE
cephfs  ceph.com/cephfs  33d
</code>

Deploy the provisioner with RBAC permissions (commands omitted for brevity). Once deployed, creating a PVC automatically triggers PV creation.
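
For reference, the provisioner deployment boils down to a ServiceAccount, RBAC bindings, and a Deployment running the provisioner image. A sketch following the upstream external-storage examples (the ClusterRole definition is omitted here; the image tag and args may differ in your version):

<code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      serviceAccountName: cephfs-provisioner
      containers:
        - name: cephfs-provisioner
          image: quay.io/external_storage/cephfs-provisioner:latest
          args: ["-id=cephfs-provisioner-1"]
</code>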

<code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "cephfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
</code>

After applying the PVC, a corresponding PV is created and bound.
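
The generated PV is named after the claim's UID and carries the provisioner annotation. Its shape is roughly as follows; the names, path, and user are illustrative placeholders generated by the provisioner, not fixed values:

<code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-a2e7b4c1   # derived from the PVC's UID
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/cephfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  cephfs:
    monitors:
      - 192.168.0.3:6789
    # directory created by cephfs_provisioner.py under the volume prefix/group
    path: /volumes/kubernetes/kubernetes-dynamic-pvc-a2e7b4c1
    user: kubernetes-dynamic-user-a2e7b4c1
    secretRef:
      name: ceph-kubernetes-dynamic-user-a2e7b4c1-secret
</code>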

Common Pitfalls

Path configuration: By default the provisioner creates volumes under /volumes/kubernetes (the volume prefix followed by the volume group). To change the top-level directory or the sub-directory, modify VOLUME_GROUP, DEFAULT_VOL_PREFIX, and related constants in cephfs_provisioner.py:

<code>POOL_PREFIX = "fsvolume_"
DEFAULT_VOL_PREFIX = "/volumes"
DEFAULT_NS_PREFIX = "fsvolumens_"
</code>

Permission issues:

Mounting the root directory requires admin privileges; the provisioner was patched to allow mounting a sub‑path.

Read/write permissions may fail on Ceph versions without namespace support; the namespace‑related logic was removed to avoid “input/output error”.

Refer to the original article for the full source code links.

Tags: Kubernetes, PersistentVolume, StorageClass, CephFS, Dynamic Provisioning
Written by

360 Zhihui Cloud Developer

360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.
