How JuiceFS Transforms Edge Rendering Storage: A Cloud‑Native Success Story
This article details Volcano Engine's edge cloud storage challenges, evaluates multiple storage solutions, and explains how JuiceFS was adopted to simplify workflows, boost read/write performance, and provide a cloud‑native, POSIX‑compatible system for large‑scale edge rendering workloads.
Edge Cloud Overview
Volcano Engine Edge Cloud combines cloud‑computing fundamentals with heterogeneous edge compute and networking to deliver a next‑generation distributed cloud solution focused on compute, network, storage, security, and intelligence at edge locations.
Edge Storage Challenges
Typical edge rendering workloads impose three core storage requirements:
Unified object storage and POSIX metadata access.
High read‑throughput performance.
Complete S3 and POSIX interface support.
Initial Solutions and Limitations
After six months of testing, internal storage components met stability and performance needs but ran into two edge‑specific issues:
Designed for centralized data centers, making it hard to satisfy physical resource constraints at edge sites.
All storage components (object, block, distributed, file) were bundled together, whereas edge scenarios primarily need file and object storage, requiring trimming and adaptation.
Two candidate architectures were explored:
CephFS + MinIO gateway – the MinIO gateway exposes the S3 interface while CephFS stores the underlying data. Performance degraded sharply once file counts reached the tens of millions.
Ceph RGW + S3FS – met most requirements, but write and modify performance was insufficient.
Core Storage Requirements for Edge Rendering
Simple operations: Easy onboarding via documentation and straightforward scaling and fault handling.
Data reliability: No data loss or inconsistency for user‑uploaded content.
Unified metadata: A single metadata layer supporting both object and file storage to reduce complexity.
Read‑optimized performance: High read throughput for read‑heavy, write‑light workloads.
Active community: A responsive community for issue resolution and feature iteration.
JuiceFS Evaluation
In September 2021, the team discovered JuiceFS and decided to test it in the edge cloud scenario. JuiceFS offers rich documentation, S3 API compatibility, full POSIX support, and CSI driver integration.
Two test environments were built:
Redis + Ceph (single node) deployment.
MySQL + Ceph (single instance) deployment.
Both setups leveraged mature components (Redis, MySQL, Ceph via Rook) and JuiceFS client integration, resulting in smooth deployment.
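As a rough illustration of how such a test volume might be created (addresses, credentials, bucket names, and volume names below are placeholders, not the actual environment), the JuiceFS client formats a volume against the chosen metadata engine and object store, then mounts it for POSIX access:
# Redis as the metadata engine; the bucket URL form depends on the S3 gateway's addressing style
juicefs format --storage s3 \
  --bucket http://ceph-rgw.example.internal:7480/jfs-redis-test \
  --access-key AKIAEXAMPLE --secret-key SECRETEXAMPLE \
  redis://192.168.1.10:6379/1 jfs-redis-test

# MySQL as the metadata engine
juicefs format --storage s3 \
  --bucket http://ceph-rgw.example.internal:7480/jfs-mysql-test \
  --access-key AKIAEXAMPLE --secret-key SECRETEXAMPLE \
  "mysql://juicefs:password@(192.168.1.11:3306)/juicefs" jfs-mysql-test

# Mount on a test node to verify POSIX reads and writes
juicefs mount -d redis://192.168.1.10:6379/1 /mnt/jfs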
JuiceFS satisfied business requirements and proved ready for production.
Benefits of Using JuiceFS
Workflow simplification: Users upload via the JuiceFS S3 gateway; the filesystem is mounted directly to rendering pods, allowing POSIX reads and writes and eliminating multiple data transfer steps.
Read acceleration: Client‑side caching speeds up frequently read data, delivering 3–5× higher throughput.
Write acceleration: Data is buffered in client memory and flushed to object storage in large chunks (64 MiB by default), greatly improving large‑file write performance. The mount sketch after this list shows the relevant client options.
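As an illustrative (not production) example, the caching and buffering behavior is controlled with client mount options such as the following; the metadata URL, cache path, and sizes are placeholders:
# --cache-dir: local disk directory used to cache object blocks for repeated reads
# --cache-size: cache capacity in MiB
# --buffer-size: in-memory read/write buffer in MiB
juicefs mount -d \
  --cache-dir /data/jfs-cache \
  --cache-size 204800 \
  --buffer-size 1024 \
  "mysql://juicefs:password@(10.0.0.20:3306)/juicefs" /mnt/jfs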
Deploying JuiceFS at the Edge
JuiceFS runs as a DaemonSet on Kubernetes, mounting the filesystem to rendering pods via HostPath. If a mount fails, the DaemonSet automatically recovers. LDAP authenticates cluster nodes for access control.
Future plans include switching from HostPath to the CSI driver for elastic scaling.
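A minimal sketch of how a rendering pod can consume the node‑level mount through HostPath (names, image, and paths are illustrative, not the production manifests):
# Illustrative only: a rendering pod using the JuiceFS mount point
# that the DaemonSet maintains on every node at /mnt/jfs.
apiVersion: v1
kind: Pod
metadata:
  name: render-worker
spec:
  containers:
    - name: renderer
      image: render-engine:latest           # placeholder image
      volumeMounts:
        - name: jfs
          mountPath: /data                  # POSIX reads/writes inside the pod
          mountPropagation: HostToContainer # pick up remounts done by the DaemonSet
  volumes:
    - name: jfs
      hostPath:
        path: /mnt/jfs                      # path mounted by the JuiceFS DaemonSet
        type: Directory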
Metadata Engine Choice
JuiceFS supports various metadata backends (MySQL, Redis). The production environment uses MySQL due to its reliability, transaction support, and operational simplicity. Both single‑instance and multi‑instance (primary‑secondary) deployments are used, with high‑performance cloud disks from Ceph as storage.
MySQL Configuration
Container resources: 8 CPU, 24 GiB RAM, 100 GiB disk (Ceph RBD). Image: mysql:5.7. Sample my.cnf:
ignore-db-dir=lost+found
max-connections=4000
innodb-buffer-pool-size=12884901888
Ceph Object Storage
The Ceph cluster (Octopus) is deployed via Rook, providing high‑performance cloud disks. Hardware includes 128 CPU cores, 512 GiB RAM, 2 TiB NVMe system SSDs, and 8 TiB NVMe data SSDs. Software stack: Debian 9, modified kernel PID limits, BlueStore backend, three‑replica configuration, and disabled PG auto‑adjustment.
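As an illustration of that configuration (pool names are placeholders), the replica count and PG auto‑adjustment can be set with standard Ceph commands:
ceph config set global osd_pool_default_size 3         # three-replica pools by default
ceph osd pool set juicefs-data pg_autoscale_mode off   # disable PG auto-adjustment for the data pool
ceph osd pool get juicefs-data size                    # verify the replica count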
JuiceFS Client Compilation
To enable direct Ceph RADOS access, the client must be compiled against a librados version matching the Ceph cluster. With Go 1.19 installed, the build command is:
make juicefs.ceph
After compilation, the filesystem can be created and mounted on compute nodes.
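A rough end‑to‑end sketch under those assumptions (the Debian package, pool name, credentials, and metadata address are placeholders; per the JuiceFS documentation for RADOS storage, --access-key carries the Ceph cluster name and --secret-key the client user):
# Install librados headers matching the cluster version (Octopus), then build
apt-get install -y librados-dev
git clone https://github.com/juicedata/juicefs.git
cd juicefs
make juicefs.ceph

# Create a volume that writes directly into a RADOS pool
./juicefs.ceph format --storage ceph \
  --bucket ceph://juicefs-pool \
  --access-key ceph \
  --secret-key client.admin \
  "mysql://juicefs:password@(10.0.0.20:3306)/juicefs" edge-render

# Mount on a compute node
./juicefs.ceph mount -d "mysql://juicefs:password@(10.0.0.20:3306)/juicefs" /mnt/jfs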
Future Outlook
More cloud‑native: Transition from HostPath to the CSI driver for elastic scaling.
Metadata engine upgrade: Introduce a gRPC metadata service and potentially migrate to TiKV for better horizontal scalability.
Feature and bug improvements: Continue adding functionality, fixing issues, and contributing upstream.
Volcano Engine Developer Services
The Volcano Engine Developer Community connects the platform with developers, offering cutting‑edge technical content and diverse events, nurturing a vibrant developer culture, and co‑building an open‑source ecosystem.