Deploy Longhorn on Kubernetes with Helm: Step‑by‑Step Guide
This article provides a comprehensive, hands‑on tutorial for deploying the open‑source Longhorn distributed block storage system on a Kubernetes cluster using Helm, covering prerequisites, Helm chart preparation, installation, validation, and PVC mounting to ensure reliable stateful workloads.
As cloud‑native technologies evolve, more enterprises run stateful services such as databases and message queues on Kubernetes, demanding higher data durability and performance. Traditional storage methods like NFS or HostPath no longer suffice.
What is Longhorn?
Longhorn is a lightweight, reliable, and easy‑to‑use distributed block storage system built specifically for Kubernetes. It is free, open‑source, originally created by Rancher Labs and now an incubating project of the Cloud Native Computing Foundation (CNCF).
Key capabilities
Provides persistent storage for distributed stateful applications in a Kubernetes cluster.
Enables block storage partitioning into Longhorn volumes, usable with any Kubernetes volume.
Replicates block storage across multiple nodes and data centers for high availability.
Supports backup to external storage such as NFS or AWS S3.
Offers cross‑cluster disaster‑recovery volumes.
Allows scheduled snapshots and regular backups to NFS or S3‑compatible secondary storage.
Facilitates volume restoration from backups.
Enables Longhorn upgrades without interrupting persistent volumes.
Pre‑deployment checks
Before installing Longhorn, ensure the following:
A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.).
Kubernetes version >= 1.25.
open‑iscsi installed and the iscsid daemon running on all nodes.
RWX support: NFSv4 client installed on each node.
Host file system supports ext4 or xfs.
Utilities such as bash, curl, findmnt, grep, awk, blkid, lsblk installed.
Cryptsetup and LUKS installed.
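The utility and module checks above can be scripted before touching the package manager; a minimal sketch (it only reports what is present and changes nothing on the node):

```shell
#!/usr/bin/env bash
# check_longhorn_prereqs: print OK/MISSING for each required utility
# and LOADED/ABSENT for each kernel module Longhorn relies on.
check_longhorn_prereqs() {
  local u m
  for u in bash curl findmnt grep awk blkid lsblk cryptsetup; do
    if command -v "$u" >/dev/null 2>&1; then
      echo "OK $u"
    else
      echo "MISSING $u"
    fi
  done
  # iscsi_tcp backs volume attachment; dm_crypt backs encrypted volumes
  for m in iscsi_tcp dm_crypt; do
    if grep -qw "$m" /proc/modules 2>/dev/null; then
      echo "LOADED $m"
    else
      echo "ABSENT $m"
    fi
  done
}
check_longhorn_prereqs
```

Run it on every node; any MISSING or ABSENT line points at the corresponding install or modprobe step below.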
<code>$ sudo yum --setopt=tsflags=noscripts install iscsi-initiator-utils -y
$ echo "InitiatorName=$( /sbin/iscsi-iname )" | sudo tee /etc/iscsi/initiatorname.iscsi
$ sudo systemctl enable iscsid --now
$ sudo modprobe iscsi_tcp
$ echo iscsi_tcp | sudo tee /etc/modules-load.d/longhorn.conf
$ sudo yum install nfs-utils -y
$ sudo yum install cryptsetup -y
$ sudo modprobe dm-crypt
$ echo dm-crypt | sudo tee -a /etc/modules-load.d/longhorn.conf</code>
Deploy Longhorn with Helm
Add the Longhorn chart repository, pull the chart, and push it to a private Harbor registry:
<code>$ helm repo add longhorn https://charts.longhorn.io
$ helm pull longhorn/longhorn --version 1.9.0
$ helm push longhorn-1.9.0.tgz oci://core.jiaxzeng.com/plugins
$ sudo helm pull oci://core.jiaxzeng.com/plugins/longhorn --version 1.9.0 --untar --untardir /etc/kubernetes/addons/
$ kubectl label nodes k8s-node01 kubernetes.io/storage=longhorn
$ kubectl label nodes k8s-node02 kubernetes.io/storage=longhorn
$ kubectl label nodes k8s-node03 kubernetes.io/storage=longhorn
$ kubectl label nodes k8s-node04 kubernetes.io/storage=longhorn</code>
Create a longhorn-values.yaml with the following relevant sections (excerpt):
<code># longhorn data nodes
global:
  nodeSelector:
    kubernetes.io/storage: longhorn
# image repositories (using internal Harbor)
image:
  longhorn:
    engine:
      repository: core.jiaxzeng.com/library/longhornio/longhorn-engine
    manager:
      repository: core.jiaxzeng.com/library/longhornio/longhorn-manager
    ui:
      repository: core.jiaxzeng.com/library/longhornio/longhorn-ui
    instanceManager:
      repository: core.jiaxzeng.com/library/longhornio/longhorn-instance-manager
    shareManager:
      repository: core.jiaxzeng.com/library/longhornio/longhorn-share-manager
  csi:
    attacher:
      repository: core.jiaxzeng.com/library/longhornio/csi-attacher
    provisioner:
      repository: core.jiaxzeng.com/library/longhornio/csi-provisioner
    nodeDriverRegistrar:
      repository: core.jiaxzeng.com/library/longhornio/csi-node-driver-registrar
    resizer:
      repository: core.jiaxzeng.com/library/longhornio/csi-resizer
    snapshotter:
      repository: core.jiaxzeng.com/library/longhornio/csi-snapshotter
    livenessProbe:
      repository: core.jiaxzeng.com/library/longhornio/livenessprobe
# data path and replica count
defaultSettings:
  defaultDataPath: /longhorn
persistence:
  defaultClassReplicaCount: 2</code>
Install Longhorn with Helm:
<code>$ helm install -n storage-system --create-namespace longhorn -f /etc/kubernetes/addons/longhorn-values.yaml /etc/kubernetes/addons/longhorn
# Output indicates successful deployment</code>Validate Longhorn installation
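Before reaching for the CLI, a quick status check of the release and its pods confirms the rollout (a sketch; the guards let it run harmlessly on a host without helm or kubectl, and the storage-system namespace matches the install above):

```shell
#!/usr/bin/env bash
# Sketch: confirm the Longhorn Helm release deployed and its pods are up.
check_longhorn_rollout() {
  local bin
  for bin in helm kubectl; do
    if ! command -v "$bin" >/dev/null 2>&1; then
      echo "$bin not found; run this where you have cluster access"
      return 0
    fi
  done
  # Release status, then the pods it created
  helm status longhorn -n storage-system
  kubectl -n storage-system get pods -o wide
}
check_longhorn_rollout
```

Expect the longhorn-manager DaemonSet pods on every labeled node, plus the CSI sidecar deployments, all in Running state.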
Download the Longhorn CLI and run a pre‑flight check:
<code>$ curl -SfL -o longhornctl https://github.com/longhorn/cli/releases/download/v1.9.0/longhornctl-linux-amd64
$ chmod +x longhornctl
$ ./longhornctl check preflight --kube-config ~/.kube/config --image core.jiaxzeng.com/library/longhornio/longhorn-cli:v1.9.0
# Logs show DNS, iscsid, NFS4, required packages, and modules are present</code>
Mount PVC validation
Create a PersistentVolumeClaim using the Longhorn storage class:
<code>cat <<'EOF' | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc-longhorn1
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
EOF</code>
Create a Deployment that mounts the PVC:
<code>cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-longhorn
  name: test-longhorn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-longhorn
  template:
    metadata:
      labels:
        app: test-longhorn
    spec:
      containers:
      - image: core.jiaxzeng.com/library/tools:v1.3
        name: tools
        volumeMounts:
        - name: data
          mountPath: /app
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc-longhorn1
EOF</code>
Verify the pod sees the mounted volume:
<code>$ kubectl exec -it deploy/test-longhorn -- df -h /app
Filesystem            Size  Used Avail Use% Mounted on
10.106.17.8:/pvc-...  2.0G     0  1.9G   0% /app</code>
Longhorn UI
The Longhorn web UI, served by the longhorn-frontend service in the installation namespace, shows node capacity, volume health, and replica placement at a glance.
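To reach the UI from outside the cluster, one common option is an Ingress in front of the longhorn-frontend service that the chart creates (a sketch; the host name is a placeholder assumption for this environment):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ui
  namespace: storage-system
spec:
  rules:
  - host: longhorn.jiaxzeng.com   # hypothetical host for this cluster
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
```

Pair this with some form of authentication at the ingress layer, since the Longhorn UI ships without a built-in login.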
Conclusion
In today’s rich Kubernetes ecosystem, selecting an appropriate persistent storage solution is crucial. Longhorn’s lightweight, flexible, and feature‑rich design makes it a top choice for many operators and developers.
Linux Ops Smart Journey
The operations journey never stops—pursuing excellence endlessly.