Deploy NFS Server and Configure Dynamic PV Provisioning with NFS StorageClass for Redis on Kubernetes
This guide details how to install and configure an NFS server, set up dynamic PersistentVolume provisioning using an NFS StorageClass, and deploy a Redis workload on Kubernetes with appropriate ConfigMaps, PVCs, and Deployment specifications.
1. Deploy an NFS server on a Linux host by installing `nfs-utils`, enabling and starting `nfs-server`, creating the shared directory `/home/nfs`, configuring `/etc/exports` with `/home/nfs *(rw,async,no_root_squash)`, reloading the export table with `exportfs -arv`, restarting the service, and verifying the share with `showmount -e`.
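On a CentOS/RHEL host, the steps above look roughly like this (run as root; the export path and options match the guide):

```shell
# Install and enable the NFS server
yum install -y nfs-utils
systemctl enable --now nfs-server

# Create the shared directory and export it to all clients
mkdir -p /home/nfs
echo '/home/nfs *(rw,async,no_root_squash)' >> /etc/exports

# Reload the export table and restart the service
exportfs -arv
systemctl restart nfs-server

# Verify the share is visible
showmount -e localhost
```

Note that `no_root_squash` allows remote root to write as root on the share; it is convenient for the provisioner but worth reviewing for production use.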
2. Install NFS client utilities on each Kubernetes node (`yum install nfs-utils` on CentOS, `apt-get install nfs-common` on Ubuntu) and optionally mount the share to test connectivity.
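A quick connectivity check from a node might look like this (using the server address `172.16.201.209` from the manifests in this guide; adjust to your environment):

```shell
# Confirm the export is visible from this node
showmount -e 172.16.201.209

# Temporarily mount the share, write a test file, then clean up
mount -t nfs 172.16.201.209:/home/nfs /mnt
touch /mnt/test && rm /mnt/test
umount /mnt
```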
3. Create the Kubernetes resources required for dynamic NFS provisioning:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```
Deploy the provisioner itself:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: hahashen/nfs-client-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 172.16.201.209
            - name: NFS_PATH
              value: /home/nfs
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.201.209
            path: /home/nfs
```
Create a StorageClass that points to the provisioner:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "true"
```
Define a PersistentVolumeClaim for the Redis workload:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-redis
  namespace: redis
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```
Create the `redis` namespace first (`kubectl create namespace redis`), apply the manifests with `kubectl apply -f .`, and verify the StorageClass and PVC status with `kubectl get sc` and `kubectl get pvc -n redis`.
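The apply-and-verify sequence, assuming all manifests are in the current directory:

```shell
kubectl create namespace redis   # the PVC targets this namespace
kubectl apply -f .
kubectl get sc                   # managed-nfs-storage should be listed
kubectl get pvc -n redis         # nfs-redis should reach Bound status
```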
4. Create a ConfigMap containing a Redis configuration file (redis.conf) and a Deployment that mounts both the ConfigMap and the NFS PVC:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: redis
data:
  redis.conf: |-
    protected-mode no
    port 6379
    tcp-backlog 511
    timeout 0
    tcp-keepalive 300
    daemonize no
    supervised no
    pidfile /data/redis_6379.pid
    loglevel notice
    logfile ""
    databases 16
    always-show-logo yes
    save 5 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    replica-serve-stale-data yes
    replica-read-only yes
    repl-diskless-sync no
    repl-diskless-sync-delay 5
    repl-disable-tcp-nodelay no
    replica-priority 100
    requirepass 123
    lazyfree-lazy-eviction no
    lazyfree-lazy-expire no
    lazyfree-lazy-server-del no
    replica-lazy-flush no
    appendonly no
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    aof-load-truncated yes
    aof-use-rdb-preamble yes
    lua-time-limit 5000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    latency-monitor-threshold 0
    notify-keyspace-events ""
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-size -2
    list-compress-depth 0
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    stream-node-max-bytes 4096
    stream-node-max-entries 100
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit replica 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    dynamic-hz yes
    aof-rewrite-incremental-fsync yes
    rdb-save-incremental-fsync yes
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: redis
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      initContainers:
        - name: system-init
          image: busybox:1.32
          command:
            - sh
            - -c
            - echo 2048 > /proc/sys/net/core/somaxconn && echo never > /sys/kernel/mm/transparent_hugepage/enabled
          securityContext:
            privileged: true
            runAsUser: 0
          volumeMounts:
            - name: sys
              mountPath: /sys
      containers:
        - name: redis
          image: redis:5.0.8
          command: ["sh", "-c", "redis-server /usr/local/etc/redis/redis.conf"]
          ports:
            - containerPort: 6379
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 300
            timeoutSeconds: 1
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 5
            timeoutSeconds: 1
            periodSeconds: 10
          volumeMounts:
            - name: data
              mountPath: /data
            - name: config
              mountPath: /usr/local/etc/redis/redis.conf
              subPath: redis.conf
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-redis
        - name: config
          configMap:
            name: redis-config
        - name: sys
          hostPath:
            path: /sys
```
Verify the deployment by listing the Redis pods and service with `kubectl get pod -n redis` and `kubectl get svc -n redis`, confirming three running replicas and a NodePort Service exposing port 6379.
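The verification step references a NodePort Service, but its manifest is not shown in this guide; a minimal sketch (the Service name and the omission of an explicit `nodePort` are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis            # name is an assumption; not shown in the original manifests
  namespace: redis
spec:
  type: NodePort
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379   # a nodePort is auto-assigned unless set explicitly
```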