
Mastering JuiceFS CSI Monitoring: From Metrics Collection to Grafana Dashboards

This guide walks ops engineers through setting up comprehensive monitoring for JuiceFS CSI in Kubernetes, covering metrics extraction via the mount pod, creating a PodMonitor for Prometheus, and visualizing data with Grafana dashboards to enable proactive issue detection and rapid response.

Linux Ops Smart Journey

In the cloud‑native world, storage is the foundation for stable application operation. As containerization spreads, managing distributed file systems efficiently becomes a key challenge, and JuiceFS CSI emerges as a high‑performance cloud‑native file system solution.

This article, from an operations perspective, explains how to build a complete monitoring system for JuiceFS CSI, enabling proactive issue detection and rapid response.

1. Retrieve metrics data

JuiceFS CSI exposes monitoring metrics on the mount pod at port 9567, on a container port named metrics.

kubectl -n csi-juicefs get pod -l app.kubernetes.io/name=juicefs-mount -owide
NAME                                                                 READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
juicefs-k8s-node02-pvc-f6dd0a27-07f2-4a97-9ca8-834dd9937fe2-pyzuqc   1/1     Running   0          35m   10.244.58.242    k8s-node02   <none>           <none>
...
curl -s 10.244.58.242:9567/metrics | head
# HELP juicefs_blockcache_blocks number of cached blocks
# TYPE juicefs_blockcache_blocks gauge
juicefs_blockcache_blocks{instance="pvc-f6dd0a27-07f2-4a97-9ca8-834dd9937fe2",juicefs_version="1.2.0+2024-06-18.873c47b9",mp="/jfs/pvc-f6dd0a27-07f2-4a97-9ca8-834dd9937fe2-pyzuqc",vol_name="myjfs"} 3
# HELP juicefs_blockcache_bytes number of cached bytes
# TYPE juicefs_blockcache_bytes gauge
juicefs_blockcache_bytes{instance="pvc-f6dd0a27-07f2-4a97-9ca8-834dd9937fe2",juicefs_version="1.2.0+2024-06-18.873c47b9",mp="/jfs/pvc-f6dd0a27-07f2-4a97-9ca8-834dd9937fe2-pyzuqc",vol_name="myjfs"} 1.063111e+06
...

Tip: The mount pod is created only when a business pod actually mounts the volume; creating a PVC alone does not start a mount pod or generate any metrics.
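For example, a minimal PVC plus a business pod that triggers mount-pod creation might look like the following sketch (the StorageClass name juicefs-sc and the default namespace are assumptions; substitute your own):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: juicefs-demo-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: juicefs-sc   # assumption: your JuiceFS StorageClass name
---
apiVersion: v1
kind: Pod
metadata:
  name: juicefs-demo-app
  namespace: default
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: juicefs-demo-pvc
EOF
```

Once the pod is scheduled, a corresponding juicefs-mount pod should appear in the CSI namespace and begin serving metrics on port 9567.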

2. Prometheus collection with PodMonitor

Using a PodMonitor makes it easy to scrape these metrics with the Prometheus Operator.

cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: juicefs-mounts-monitor
  namespace: csi-juicefs
  labels:
    release: monitor
spec:
  namespaceSelector:
    matchNames:
    - csi-juicefs
  selector:
    matchLabels:
      app.kubernetes.io/name: juicefs-mount
  podMetricsEndpoints:
  - port: metrics
    path: '/metrics'
    scheme: 'http'
    interval: '5s'
EOF

After applying the PodMonitor, open the Targets page in the Prometheus UI and confirm that the juicefs-mounts-monitor endpoints appear with state Up.
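Once the targets are up, you can run a few quick queries to confirm data is flowing. The queries below are a sketch; the two juicefs_blockcache_* metric names are taken from the sample output earlier, and the namespace label value matches the PodMonitor above:

```promql
# Scrape health: number of mount-pod targets currently up in the CSI namespace
sum(up{namespace="csi-juicefs"})

# Cached blocks and cached bytes per JuiceFS volume
sum by (vol_name) (juicefs_blockcache_blocks)
sum by (vol_name) (juicefs_blockcache_bytes)
```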

3. Add monitoring dashboard

In Grafana, add the Prometheus instance above as a data source, then import the official JuiceFS dashboard JSON to visualize the metrics.
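If you prefer to script the import rather than use the Grafana UI, Grafana's HTTP API accepts a dashboard payload at /api/dashboards/db. A hedged sketch follows; GRAFANA_URL, the API token, and the dashboard.json path are all placeholders you must supply, and jq is assumed to be installed:

```shell
GRAFANA_URL="http://grafana.example.com:3000"   # placeholder: your Grafana address
TOKEN="<grafana-api-token>"                     # placeholder: a token with dashboard write access

# Wrap the downloaded dashboard JSON in the payload shape expected by
# Grafana's /api/dashboards/db endpoint, then POST it.
jq -n --slurpfile d dashboard.json \
  '{dashboard: $d[0], overwrite: true, folderId: 0}' |
curl -s -X POST "$GRAFANA_URL/api/dashboards/db" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data-binary @-
```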

Conclusion

In the cloud‑native era, storage is no longer a mere backend component; it directly impacts business continuity and user experience. The powerful capabilities of JuiceFS CSI are fully realized only with continuous monitoring and timely feedback. By following this guide, you can establish a complete monitoring system to protect your cloud‑native storage workloads.

Tags: Monitoring, Cloud Native, Operations, Kubernetes, Prometheus, CSI, Grafana, JuiceFS
Written by Linux Ops Smart Journey
The operations journey never stops; pursuing excellence endlessly.