Step‑by‑Step Guide to Integrate Ceph RGW Object Storage with Kubernetes
This tutorial walks you through creating a Ceph RGW user, configuring a Kubernetes Secret with its credentials, deploying the CSI-S3 driver, setting up a StorageClass and PersistentVolumeClaim, and verifying the integration, while highlighting the driver's experimental status and caveats for production use.
This article continues the series on integrating Ceph with Kubernetes, focusing on Ceph RGW (object storage) integration.
Tip: Ceph object storage integration with Kubernetes is still experimental and should not be used in any production environment.
Ceph Related Information
1. Create a Ceph object storage user:
<code>$ radosgw-admin user create --uid=jiaxzeng --display-name="Linux运维智行录"</code>
2. Grant user capabilities:
<code>$ radosgw-admin caps add --uid jiaxzeng --caps='buckets=*'
$ radosgw-admin caps add --uid jiaxzeng --caps='metadata=*'</code>
3. Retrieve user information:
<code>$ radosgw-admin user info --uid=jiaxzeng</code>
Tip: Save the access_key and secret_key values from the output for later use.
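Because `radosgw-admin user info` prints JSON, the two keys can also be extracted programmatically instead of copied by hand. A minimal sketch, assuming python3 is available; the embedded JSON is a trimmed, hypothetical excerpt of the real output:

```shell
# Trimmed, hypothetical excerpt of `radosgw-admin user info --uid=jiaxzeng` output.
info='{"user_id":"jiaxzeng","keys":[{"user":"jiaxzeng","access_key":"IFLMJ6L2TTQ7EZWXLQD5","secret_key":"vvjNUneyybatBn3C01iA3XM11S5BwudElT3TVN4V"}]}'

# Pull both values out of the "keys" array (jq works equally well if installed).
ACCESS_KEY=$(printf '%s' "$info" | python3 -c 'import sys,json; print(json.load(sys.stdin)["keys"][0]["access_key"])')
SECRET_KEY=$(printf '%s' "$info" | python3 -c 'import sys,json; print(json.load(sys.stdin)["keys"][0]["secret_key"])')
echo "$ACCESS_KEY"
echo "$SECRET_KEY"
```

These are the values that go into the Kubernetes Secret in the next section.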
Kubernetes Integration of Ceph RGW
1. Download the CSI‑S3 driver files:
<code>$ curl -LO https://github.com/ctrox/csi-s3/archive/refs/heads/master.zip
$ unzip master.zip
$ sudo mkdir -p /etc/kubernetes/addons/cephrgw</code>
2. Create a secret with Ceph RGW credentials:
<code>$ cat <<'EOF' | sudo tee /etc/kubernetes/addons/cephrgw/csi-s3-secret.yaml > /dev/null
apiVersion: v1
kind: Secret
metadata:
namespace: kube-system
name: csi-s3-secret
stringData:
accessKeyID: IFLMJ6L2TTQ7EZWXLQD5
secretAccessKey: vvjNUneyybatBn3C01iA3XM11S5BwudElT3TVN4V
endpoint: http://172.139.20.100:7480
region: ""
EOF
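# Note: values under stringData are stored base64-encoded in the Secret's
# .data field; for example, the access key above ends up as:
$ printf '%s' 'IFLMJ6L2TTQ7EZWXLQD5' | base64
SUZMTUo2TDJUVFE3RVpXWExRRDU=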
$ kubectl apply -f /etc/kubernetes/addons/cephrgw/csi-s3-secret.yaml</code>
3. Deploy the provisioner:
<code>$ sudo cp csi-s3-master/deploy/kubernetes/provisioner.yaml /etc/kubernetes/addons/cephrgw
$ sudo sed -ri 's@quay.io/k8scsi@172.139.20.170:5000/library@g' /etc/kubernetes/addons/cephrgw/provisioner.yaml
$ sudo sed -ri 's@ ctrox@ 172.139.20.170:5000/library@g' /etc/kubernetes/addons/cephrgw/provisioner.yaml
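# The two seds above redirect image references to the local registry; effect
# on a hypothetical manifest line:
$ echo 'image: quay.io/k8scsi/csi-provisioner:v1.6.0' | sed -r 's@quay.io/k8scsi@172.139.20.170:5000/library@g'
image: 172.139.20.170:5000/library/csi-provisioner:v1.6.0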
$ kubectl create -f /etc/kubernetes/addons/cephrgw/provisioner.yaml</code>
4. Deploy the attacher (including fixes for pod mount failures):
<code>$ sudo cp csi-s3-master/deploy/kubernetes/attacher.yaml /etc/kubernetes/addons/cephrgw
$ sudo sed -ri 's@quay.io/k8scsi@172.139.20.170:5000/library@g' /etc/kubernetes/addons/cephrgw/attacher.yaml
$ kubectl create -f /etc/kubernetes/addons/cephrgw/attacher.yaml
# Resolve mount failures by adjusting the ClusterRole permissions and the attacher image version, per the issue and pull request linked in the references below.</code>
5. Deploy the CSI‑S3 driver:
<code>$ sudo cp csi-s3-master/deploy/kubernetes/csi-s3.yaml /etc/kubernetes/addons/cephrgw
$ sudo sed -ri 's@quay.io/k8scsi@172.139.20.170:5000/library@g' /etc/kubernetes/addons/cephrgw/csi-s3.yaml
$ sudo sed -ri 's@ ctrox@ 172.139.20.170:5000/library@g' /etc/kubernetes/addons/cephrgw/csi-s3.yaml
$ kubectl create -f /etc/kubernetes/addons/cephrgw/csi-s3.yaml</code>
6. Create a StorageClass for Ceph RGW:
<code>$ cat <<'EOF' | sudo tee /etc/kubernetes/addons/cephrgw/storageclass.yaml > /dev/null
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ceph-rgw-storage
provisioner: ch.ctrox.csi.s3-driver
parameters:
mounter: goofys
csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
csi.storage.k8s.io/provisioner-secret-namespace: kube-system
csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
csi.storage.k8s.io/node-stage-secret-namespace: kube-system
csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
csi.storage.k8s.io/node-publish-secret-namespace: kube-system
reclaimPolicy: Retain
allowVolumeExpansion: true
EOF
$ kubectl apply -f /etc/kubernetes/addons/cephrgw/storageclass.yaml</code>
Verification
1. Create a PersistentVolumeClaim:
<code>$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-ceph-rgw-pvc
namespace: default
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: ceph-rgw-storage
EOF
$ kubectl get pvc test-ceph-rgw-pvc</code>
2. Deploy a pod that uses the PVC:
<code>$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: tools
spec:
replicas: 2
selector:
matchLabels:
app: tools
template:
metadata:
labels:
app: tools
spec:
containers:
- name: tools
image: registry.cn-guangzhou.aliyuncs.com/jiaxzeng6918/tools:v1.1
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: test-ceph-rgw-pvc
EOF
$ kubectl get pod -l app=tools -o wide
$ kubectl exec -it <pod-name> -- df -h /data</code>
Tip: A Ceph RGW PVC cannot enforce its requested storage size. Like CephFS, Ceph RGW volumes support the ReadWriteOnce, ReadOnlyMany, and ReadWriteMany access modes (see the Kubernetes documentation).
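Since the PVC is ReadWriteMany, both replicas mount the same bucket, so a file written from one pod should be visible from the other. A quick sketch against the running cluster; `<pod-a>` and `<pod-b>` are placeholders for the two names printed by `kubectl get pod -l app=tools`:

```shell
# Write a file through the first pod, then read it back through the second.
kubectl exec <pod-a> -- sh -c 'echo hello-rgw > /data/shared.txt'
kubectl exec <pod-b> -- cat /data/shared.txt
```

If the second command prints the file's contents, both pods are backed by the same RGW bucket.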
Reference articles:
• Container storage S3 interface: https://github.com/ctrox/csi-s3
• Mount failure issues: https://github.com/ctrox/csi-s3/issues/80
• PVC merge fix: https://github.com/ctrox/csi-s3/pull/70/files
Integrating Kubernetes with external storage expands the storage capabilities of containerized applications and enables flexible, efficient data management. As cloud-native technology evolves, more storage solutions are expected to integrate seamlessly with Kubernetes.
Linux Ops Smart Journey
The operations journey never stops—pursuing excellence endlessly.