
How to Enable Ceph NFS Service with nfs-ganesha: Step‑by‑Step Guide

This article walks through configuring Ceph to provide NFS services using nfs‑ganesha, covering module checks, cluster creation, export setup, client mounting, data verification, and high‑availability configuration with haproxy and keepalived, complete with command‑line examples.

Raymond Ops

Ceph provides NFS service

Ceph can provide NFS access in addition to native CephFS, making it easier for clients without Ceph-specific tooling to mount the filesystem.

1. Preparation

1.1 Check module

<code># ceph mgr module ls | grep nfs
"nfs"</code>

If not enabled, enable it:

<code># ceph mgr module enable nfs</code>

1.2 Check for nfs‑ganesha cluster

<code># ceph nfs cluster ls</code>

2. Create NFS service

Using automatic creation:

<code># ceph nfs cluster create 1 "ceph01 ceph02"
NFS Cluster Created Successfully
# ceph nfs cluster ls
1</code>

The argument after <code>create</code> (here <code>1</code>) is the cluster ID; you can choose any identifier. The quoted host list indicates which nodes the nfs-ganesha daemons run on.
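To confirm where the daemons actually landed, you can inspect the cluster afterwards (the exact output shape varies by Ceph release):

<code># ceph nfs cluster info 1
# ceph orch ps --daemon-type nfs</code>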

2.1 Ceph NFS architecture

Ceph’s NFS uses the nfs‑ganesha component, while CephFS is provided by the MDS component. nfs‑ganesha acts as a translator, converting NFS requests to CephFS operations.

Client storage request → nfs‑ganesha → MDS

nfs-ganesha can also export RGW buckets, though that path is less mature than CephFS exports.
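For reference, recent Ceph releases accept a bucket export for RGW as well; a minimal sketch (the bucket name here is hypothetical):

<code># ceph nfs export create rgw --cluster-id 1 --pseudo-path /mybucket --bucket mybucket</code>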

3. NFS export

3.1 Create CephFS

<code># ceph fs volume create cephfs
# ceph osd pool ls
device_health_metrics
.nfs
cephfs.cephfs.meta
cephfs.cephfs.data</code>

3.2 Export

<code># ceph nfs export create cephfs --cluster-id 1 --pseudo-path /cephfs --fsname cephfs --path=/
{
  "bind": "/cephfs",
  "fs": "cephfs",
  "path": "/",
  "cluster": "1",
  "mode": "RW"
}</code>

This creates an NFS export from CephFS on cluster <code>1</code> with pseudo-path <code>/cephfs</code>. <code>--fsname</code> specifies the CephFS filesystem name, and <code>--path</code> is the path within CephFS to export.
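The resulting exports can be listed and inspected at any time:

<code># ceph nfs export ls 1
# ceph nfs export info 1 /cephfs</code>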

3.3 Client mount

<code># mount -t nfs 172.16.1.20:/cephfs /mnt
# df | grep mnt
172.16.1.20:/cephfs  59736064 0 59736064 0% /mnt</code>

Replace the IP with your NFS server’s address.
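To make the mount persist across reboots, an <code>/etc/fstab</code> entry along these lines works (adjust the IP and mount point for your environment):

<code>172.16.1.20:/cephfs  /mnt  nfs  defaults,_netdev  0  0</code>

The <code>_netdev</code> option delays the mount until the network is up.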

3.4 Verification

Authorize a CephFS client and mount CephFS directly (e.g. on another host) to verify that the same data is visible through both paths.

<code># ceph fs authorize cephfs client.cephfs / rw -o ceph.client.cephfs.keyring
# cat ceph.client.cephfs.keyring
[client.cephfs]
	key = AQBTNHFmDhxSABAAqB69R7Y3Rb89LA06R0pfmw==
# mount -t ceph 172.16.1.20:6789:/ /mnt -o name=cephfs,secret=AQBTNHFmDhxSABAAqB69R7Y3Rb89LA06R0pfmw==
# echo hello > /mnt/cephfs</code>

List the NFS mount to see the file:

<code># ls /mnt
cephfs
# cat /mnt/cephfs
hello</code>

4. High‑availability nfs‑ganesha

Ceph can deploy nfs-ganesha in high availability using an ingress service backed by haproxy and keepalived. Example service specification:

<code>service_type: ingress
service_id: nfs.1            # ingress for the NFS cluster with ID 1
backend_service: nfs.1       # the NFS service this ingress fronts
placement:
  hosts:
    - ceph01
    - ceph02
  count_per_host: 1          # number of ingress daemons per node
frontend_port: 20490         # changed from default 2049 due to conflict
monitor_port: 9000
virtual_ip: 172.16.1.100/24</code>

Note: Similar HA setup can be applied to RGW.
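A sketch of applying the spec and mounting via the virtual IP (the filename is arbitrary; note the non-default port):

<code># ceph orch apply -i nfs-ingress.yaml
# mount -t nfs -o port=20490 172.16.1.100:/cephfs /mnt</code>

Clients now reach nfs-ganesha through keepalived's virtual IP, so a node failure does not break the mount.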

Tags: Linux, storage, Ceph, NFS, HA, nfs-ganesha
Written by

Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
