How to Set Up Ceph NFS Service for Easy Client Mounting
This guide walks through enabling the Ceph NFS manager module, creating an NFS cluster, exporting a CephFS directory via NFS, mounting it from a client, verifying data consistency, and configuring high‑availability for the nfs‑ganesha service using haproxy and keepalived.
Prerequisites
Ensure the Ceph manager NFS module is loaded and that an nfs‑ganesha cluster is available.
# ceph mgr module ls | grep nfs
# ceph mgr module enable nfs
Verify the existence of an NFS cluster:
# ceph nfs cluster ls
Create NFS Service
Use the automatic creation command to start an NFS cluster. The numeric ID after create is the cluster ID and can be any integer you choose.
# ceph nfs cluster create 1 "ceph01 ceph02"
NFS Cluster Created Successfully
# ceph nfs cluster ls
1
The 1 after create is the cluster ID; the quoted host list ("ceph01 ceph02") specifies the nodes on which the NFS daemons will run.
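After creation, you can inspect where the nfs-ganesha daemons were placed. The commands below are a sketch against the cluster ID 1 from the example above and require a running cluster:

```shell
# Show backend daemon placement (host, IP, port) for NFS cluster 1
ceph nfs cluster info 1

# nfs-ganesha daemons are deployed as a cephadm service; confirm it is running
ceph orch ls nfs
```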
Ceph NFS Architecture
Ceph’s NFS service uses the nfs‑ganesha component, which translates NFS requests into CephFS operations. The data path is:
Client request → nfs‑ganesha → libcephfs → MDS (metadata) / OSDs (data)
Exports can also be backed by RGW, but that integration is more limited than the CephFS path.
NFS Export (Backend Storage Configuration)
3.1 Create CephFS (if not already present)
# ceph fs volume create cephfs
# ceph osd pool ls
device_health_metrics
.nfs
cephfs.cephfs.meta
cephfs.cephfs.data
3.2 Export CephFS via NFS
# ceph nfs export create cephfs \
--cluster-id 1 \
--pseudo-path /cephfs \
--fsname cephfs \
--path=/
{
"bind": "/cephfs",
"fs": "cephfs",
"path": "/",
"cluster": "1",
"mode": "RW"
}
--cluster-id 1 refers to the NFS cluster created earlier. --fsname cephfs selects the CephFS to export. --path / specifies the CephFS root to expose.
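Once created, the export can be listed and its full definition dumped back for review. On recent Ceph releases the commands below should work against the cluster ID used above; older releases spell the second command `ceph nfs export get`:

```shell
# List pseudo-paths exported by NFS cluster 1
ceph nfs export ls 1

# Dump the full export definition for the /cephfs pseudo-path
ceph nfs export info 1 /cephfs
```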
3.3 Client Mount Test
# mount -t nfs 172.16.1.20:/cephfs /mnt
# df | grep mnt
172.16.1.20:/cephfs 59736064 0 59736064 0% /mnt
Replace the IP with your own NFS service address.
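To make the mount survive reboots, an /etc/fstab entry can be added. The line below is a sketch assuming the same server address and pseudo-path as above, using NFSv4.1 (which nfs-ganesha serves) and `_netdev` so the mount waits for the network:

```shell
# /etc/fstab entry (one line); run `mount -a` afterwards to apply it
172.16.1.20:/cephfs  /mnt  nfs  vers=4.1,_netdev  0  0
```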
3.4 Verify Data Visibility
Authorize a CephFS client and mount the underlying CephFS directly (from a separate client, or at a separate mount point, so it does not clash with the NFS mount) to write a test file.
# ceph fs authorize cephfs client.cephfs / rw -o ceph.client.cephfs.keyring
# cat ceph.client.cephfs.keyring
AQBTNHFmDhxSABAAqB69R7Y3Rb89LA06R0pfmw==
# mount -t ceph 172.16.1.20:6789:/ /mnt -o name=cephfs,secretfile=./ceph.client.cephfs.keyring
# echo hello > /mnt/cephfs
# ls /mnt
cephfs
# cat /mnt/cephfs
hello
The file written via CephFS is visible through the NFS mount, confirming that NFS is correctly backed by CephFS.
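Consistency can also be checked in the opposite direction. The sketch below assumes the NFS export is mounted at /mnt/nfs and CephFS at /mnt/cephfs (separate, hypothetical mount points rather than the shared /mnt used above):

```shell
# Write through the NFS mount...
echo world > /mnt/nfs/from-nfs

# ...and read the same file back through the CephFS mount
cat /mnt/cephfs/from-nfs
```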
High‑Availability for nfs‑ganesha
Ceph can provide HA for the NFS service using haproxy and keepalived. Example service definition (YAML‑style) for the HA layer:
service_type: ingress
service_id: nfs.1
placement:
  hosts:
    - ceph01
    - ceph02
  count_per_host: 1
spec:
  backend_service: nfs.1
  frontend_port: 20490   # changed from default 2049 to avoid conflict
  monitor_port: 9000
  virtual_ip: 172.16.1.100/24
Only the frontend_port adjustment is highlighted; the same HA pattern can be applied to RGW services.
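The spec is applied with the orchestrator, after which clients mount through the virtual IP. Because frontend_port was moved off the default 2049, clients must pass port= explicitly; the spec file name below is arbitrary:

```shell
# Deploy the haproxy/keepalived ingress layer from the spec file
ceph orch apply -i nfs-ingress.yaml

# Mount through the virtual IP on the non-default frontend port
mount -t nfs -o port=20490 172.16.1.100:/cephfs /mnt
```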
Reference: https://www.cnblogs.com/fsdstudy/p/18254504 (original article).