
How JuiceFS CSI Transforms Kubernetes Storage with MountPod Mode

This article explains how JuiceFS integrates with Kubernetes via the CSI interface, covering its three deployment modes, the detailed Mount‑Pod workflow, step‑by‑step Helm deployment, configuration, verification, and why this cloud‑native storage solution outperforms traditional block storage for modern applications.

Linux Ops Smart Journey

Introduction

In the cloud‑native era, Kubernetes is the brain for scheduling container workloads, while storage is the blood that sustains the system. Traditional block and local storage struggle with growing data volumes, complex micro‑service architectures, and concurrent multi‑node reads/writes.

JuiceFS CSI Overview

JuiceFS provides a cloud‑native file system that integrates with Kubernetes through the CSI interface. It offers three deployment modes: Mount‑Pod mode, sidecar mode, and process mode.

Tip: process mode was the norm in versions before v0.10; sidecar mode is suited to serverless Kubernetes environments; Mount‑Pod mode is the default from v0.10 onward for standard clusters.

Mount‑Pod Mode Components

The Mount‑Pod mode consists of a CSI Controller Service and a CSI Node Service.

Controller Service: Creates a sub‑directory in the JuiceFS file system named after the PersistentVolume ID.

Node Service: Creates a Mount‑Pod (which runs the JuiceFS client) on the application Pod's node and binds the mount point into the application Pod. The detailed workflow is described below.

Mount‑Pod Workflow

1. A user creates an application Pod that references a JuiceFS PVC.

2. The CSI Node Service creates a Mount‑Pod on the same node as the application Pod.

3. The Mount‑Pod starts the JuiceFS client and exposes the mount point on the host.

4. Once the Mount‑Pod is ready, the CSI Node Service binds the corresponding JuiceFS sub‑directory to the path declared in the container's VolumeMount.

5. Kubelet finally creates the application Pod.
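The workflow can be observed on a live cluster. As a sketch, assuming the defaults from the JuiceFS CSI documentation (Mount‑Pods carrying the `app.kubernetes.io/name=juicefs-mount` label in the `kube-system` namespace; both may differ in your deployment):

```shell
# List Mount-Pods created by the CSI Node Service (namespace is an assumption)
kubectl -n kube-system get pods -l app.kubernetes.io/name=juicefs-mount -o wide

# On the node where a Mount-Pod runs, the JuiceFS mount point is visible on the host
mount | grep -i juicefs
```

If no Mount‑Pods appear, check the CSI Node Service logs on the node hosting the application Pod.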

Deploying JuiceFS CSI

1. Pull the Helm chart and push it to the private registry

$ helm repo add juicefs https://juicedata.github.io/charts/
$ helm pull juicefs/juicefs-csi-driver --version 0.28.4
$ helm push juicefs-csi-driver-0.28.4.tgz oci://core.jiaxzeng.com/plugins

2. Distribute the chart to the Kubernetes nodes

$ sudo helm pull oci://core.jiaxzeng.com/plugins/juicefs-csi-driver --version 0.28.4 --untar --untardir /etc/kubernetes/addons/

3. Create the CSI configuration file

# mode
mountMode: mountpod

# custom images
image:
  repository: core.jiaxzeng.com/csi/juicedata/juicefs-csi-driver
  tag: "v0.28.3"
# … (other sidecar images omitted for brevity)

driverName: "csi.juicefs.com"
kubeletDir: /var/lib/kubelet

node:
  tolerations:
  - operator: Exists

dashboard:
  auth:
    enabled: true
    username: jiaxzeng
    password: clouD@0209
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      cert-manager.io/cluster-issuer: ca-cluster-issuer
    hosts:
    - host: juicefs.jiaxzeng.com
      paths:
      - path: /
        pathType: ImplementationSpecific
    tls:
    - secretName: juicefs.jiaxzeng.com-tls
      hosts:
      - juicefs.jiaxzeng.com

globalConfig:
  mountPodPatch:
  - lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","update-ca-certificates"]
    volumeMounts:
    - name: ca-certs
      mountPath: /usr/local/share/ca-certificates/ca.pem
    volumes:
    - name: ca-certs
      secret:
        secretName: s3-ca-cert-secret
        defaultMode: 420
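The mountPodPatch above expects a Secret named s3-ca-cert-secret to already exist in the namespace where the Mount‑Pods run. A sketch of creating it, assuming the Mount‑Pods run in the release namespace (csi-juicefs) and the CA file is named ca.pem; adjust both to your environment:

```shell
# Create the CA Secret referenced by mountPodPatch; the namespace and
# source file name (ca.pem) are assumptions to adapt to your cluster.
kubectl -n csi-juicefs create secret generic s3-ca-cert-secret \
  --from-file=ca.pem=./ca.pem
```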

4. Install the CSI driver

$ helm -n csi-juicefs upgrade juicefs --install --create-namespace -f /etc/kubernetes/addons/juicefs-csi-driver-values.yaml /etc/kubernetes/addons/juicefs-csi-driver

With the release installed, create a StorageClass or PV as shown in the verification steps below.
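Before moving on, a couple of quick checks confirm the rollout (names follow the install command above):

```shell
# Controller and node pods should all be Running
kubectl -n csi-juicefs get pods

# The driver should be registered under the name set in the values file
kubectl get csidrivers.storage.k8s.io csi.juicefs.com
```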

Verification

1. Create a Secret and StorageClass

apiVersion: v1
kind: Secret
metadata:
  name: juicefs-secret
  namespace: csi-juicefs
type: Opaque
stringData:
  name: myjfs
  access-key: gT7FHx6h4DalyTndevKw
  secret-key: xxxx
  metaurl: postgres://juicefs:[email protected]:9999/juicefs
  storage: minio
  bucket: https://s3.jiaxzeng.com/juicefs-%d
  envs: "{TZ: Asia/Shanghai}"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc
provisioner: csi.juicefs.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: csi-juicefs
  csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: csi-juicefs
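One detail worth noting in the Secret above: `stringData` takes plain‑text values, and the API server base64‑encodes them on write. If you used the `data` field instead, you would have to encode each value yourself, e.g. for the file‑system name:

```shell
# `stringData` accepts plain text; `data` requires base64-encoded values.
# Encoding the file-system name from the Secret above:
echo -n 'myjfs' | base64
# -> bXlqZnM=

# Decoding round-trips to the original value:
echo -n 'bXlqZnM=' | base64 -d
# -> myjfs
```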

2. Create a PVC and a test Pod

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: juicefs-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: juicefs-app
  namespace: default
spec:
  containers:
  - name: app
    image: core.jiaxzeng.com/library/tools:v1.3
    volumeMounts:
    - mountPath: /data
      name: juicefs-pv
  volumes:
  - name: juicefs-pv
    persistentVolumeClaim:
      claimName: juicefs-pvc
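The manifests above can then be applied and the claim watched until dynamic provisioning binds it (the file name here is an assumption):

```shell
# Apply the PVC and the test Pod (file name is an assumption)
kubectl apply -f juicefs-pvc-and-app.yaml

# The PVC should reach Bound once the controller provisions the volume,
# and the Pod should start after its Mount-Pod is ready
kubectl -n default get pvc juicefs-pvc
kubectl -n default get pod juicefs-app
```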

Tip: If you set resources.requests.storage to "1G" (a decimal suffix), the container may see a limit of "1P"; use the binary suffix "1Gi" instead.
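A minimal sketch of the safe form, matching the PVC above:

```yaml
resources:
  requests:
    storage: 1Gi   # binary suffix (Gi), not the decimal "1G"
```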

3. Verify capacity limits

$ kubectl exec -it juicefs-app -- bash
# df -h /data/
Filesystem      Size  Used Avail Use% Mounted on
JuiceFS:myjfs   1.0G   0   1.0G   0% /data
# dd if=/dev/zero of=/data/test bs=1M count=1025 oflag=direct
dd: error writing '/data/test': Disk quota exceeded

4. Access the JuiceFS CSI dashboard

Open https://juicefs.jiaxzeng.com (the ingress host configured above) and sign in with the dashboard credentials from the values file.

(Screenshot: JuiceFS CSI dashboard)

Conclusion

As cloud‑native technologies evolve, traditional storage solutions can no longer meet the elasticity, performance, and cost demands of modern applications. The deep integration of JuiceFS with Kubernetes CSI offers a practical, high‑performance, and cost‑effective solution for cloud‑native persistent storage.

Tags: Kubernetes, Cloud Native Storage, CSI, JuiceFS, Helm, PersistentVolume, MountPod
Written by Linux Ops Smart Journey. The operations journey never stops, pursuing excellence endlessly.
