How to Integrate Kubernetes with External Storage Using CSI (Local Mode Guide)
This tutorial explains how to connect Kubernetes to external storage via the Container Storage Interface, covering local mode storage setup, configuration steps, verification, and best‑practice tips for reliable, persistent data management in cloud‑native environments.
Background
In the cloud‑native era, Kubernetes is the de‑facto standard for container orchestration, but containers are short‑lived while data must be persisted. This article explains how to integrate Kubernetes with powerful external storage to keep data safe, reliable and easy to manage.
A workload relies on four resource types: CPU, memory, network and storage. CPU and memory are virtualized directly for containers; networking is supplied by a CNI plugin such as Calico; storage is supplied through CSI, which is the subject of this article.
CSI Overview
The Container Storage Interface (CSI) is an abstraction that allows Kubernetes to work with many storage systems. Five integration approaches are listed below; this article covers the first, local mode:
Local mode storage
Kubernetes with NFS
Kubernetes with RBD
Kubernetes with CephFS
Kubernetes with Ceph RGW
Local Mode Storage
Local mode uses a directory on a node's local disk as the backing storage. Unlike a plain hostPath volume, a local PersistentVolume carries node affinity, so the scheduler automatically places pods onto the node where the volume lives; there is no need to pin the Deployment to a specific node.
Tip: If the node hosting the volume fails, pods using its local storage cannot be rescheduled elsewhere and will not start.
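Before creating any objects, note that the PV manifest in Step 2 points at /data/test-local-pv on node k8s-node01; Kubernetes does not create that directory for you, so it must exist on the node before a pod can mount the volume. A minimal preparation step, assuming SSH access to the node:

```shell
# Create the backing directory on the node that will host the local PV.
# The node name and path here match the PV manifest used in this guide.
ssh k8s-node01 'mkdir -p /data/test-local-pv'
```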
Step 1 – Create a StorageClass
<code>cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF</code>
Step 2 – Create a PersistentVolume
<code>cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  # Delete is not supported for statically provisioned local volumes,
  # so use Retain and reclaim the directory manually.
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/test-local-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-node01
EOF</code>
Step 3 – Create a PersistentVolumeClaim
<code>cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-pvc
  namespace: default
spec:
  storageClassName: local-storage
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF</code>
Tip: In local mode the PV and PVC are not bound immediately. Binding happens only when a pod consumes the PVC, because the StorageClass's volumeBindingMode must be WaitForFirstConsumer.
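You can observe this delayed binding before deploying any workload (resource names taken from the manifests above):

```shell
# The PV reports STATUS Available, and the PVC stays Pending,
# until a pod that consumes the claim is scheduled.
kubectl get pv test
kubectl get pvc test-local-pvc -n default
```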
Verification
Deploy a workload and mount the volume
<code>cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tools
  template:
    metadata:
      labels:
        app: tools
    spec:
      containers:
        - name: tools
          image: registry.cn-guangzhou.aliyuncs.com/jiaxzeng6918/tools:v1.1
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: test-local-pvc
EOF</code>
Check the pod's node
<code>kubectl get pod -o wide</code>
Confirm the mount
<code>kubectl exec -it tools-6f6f7cd4bf-jjlbg -- df -h /data</code>
Tip: Local mode cannot enforce storage size limits; the 10Gi capacity on the PV and PVC is declarative only.
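One way to see the missing enforcement is to write more data into the volume than the 10Gi claim allows: the write succeeds, because the only real limit is the node's disk. A sketch, assuming the Deployment above is running (note this consumes real disk space on k8s-node01):

```shell
# Write ~11 GiB into the volume, exceeding the 10Gi claim.
kubectl exec deploy/tools -- dd if=/dev/zero of=/data/overfill bs=1M count=11264
# The write is not rejected by Kubernetes; clean up afterwards.
kubectl exec deploy/tools -- rm /data/overfill
```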
Conclusion
Integrating Kubernetes with external storage expands the storage capabilities of containerized applications and provides flexible, efficient data management. As cloud‑native technologies evolve, more storage solutions will seamlessly integrate with Kubernetes, unlocking data’s potential to drive business innovation.