
Mount S3 as a Filesystem in Kubernetes with s3fs-fuse and DaemonSet

This article explains how to use FUSE‑based s3fs to mount an Amazon S3 bucket as a regular filesystem inside Kubernetes pods via a DaemonSet, covering background, FUSE principles, implementation steps, Docker image creation, ConfigMap and DaemonSet configuration, and performance trade‑offs.


Background

The feature platform project needed a persistent file system for Zeppelin. Domestic (China-region) deployments used NFS, but overseas clusters offered only S3 object storage, so NFS was impractical there. Direct SDK/CLI access to S3 is cumbersome and breaks Zeppelin's git-based version management, so a file-system-like solution was required.

FUSE and S3FS Introduction

FUSE (Filesystem in Userspace) lets developers create file systems without kernel changes, improving productivity. S3FS is a FUSE‑based file system that mounts S3 buckets as local directories, supporting POSIX‑like operations and working on Linux and macOS.
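Before moving to Kubernetes, it helps to see what s3fs does on a single host. A minimal manual mount might look like the following sketch (bucket name and paths are placeholders; the credentials file holds ACCESS_KEY:SECRET_KEY on one line):

echo "<YOUR-AWS-ACCESS-KEY>:<YOUR-AWS-SECRET-KEY>" > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
mkdir -p /mnt/s3
s3fs <YOUR-S3-BUCKET-NAME> /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs
ls /mnt/s3                # bucket objects appear as regular files
fusermount -u /mnt/s3     # unmount when finished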

FUSE Basic Principle

FUSE consists of a kernel module that intercepts file-system calls and forwards them through the /dev/fuse device to a userspace process. Using the libfuse library, the userspace program registers callbacks for operations such as read, write, and directory traversal, and the kernel module invokes these callbacks on demand.
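Once a FUSE filesystem such as s3fs is mounted (see the manual example above), this kernel/userspace split is visible from the shell; a quick inspection, assuming a running mount:

lsmod | grep fuse          # the kernel module that intercepts VFS calls
mount | grep s3fs          # the mount entry, of type fuse.s3fs
ps aux | grep '[s]3fs'     # the userspace daemon serving the callbacks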

S3FS Implementation Scheme

Installing S3FS on every Kubernetes node and manually mounting the bucket is error-prone and costly to maintain. Instead, a Docker image containing s3fs and its dependencies is built, a ConfigMap supplies the bucket name and credentials, and a DaemonSet runs a pod on each node that performs the mount and exposes it to the host through a hostPath volume with bidirectional mount propagation.

S3FS Practical Steps

Overall Process

1. Build a Docker image that installs s3fs-fuse and includes a startup script.

2. Define a ConfigMap with the S3 bucket name and credentials.

3. Create a DaemonSet that runs the image on every node, mounts the host's /dev/fuse device, and executes the startup script to mount the bucket.

Step 1: Build Image

FROM alpine:latest
ENV MNT_POINT=/var/s3
ENV IAM_ROLE=none
ENV S3_REGION=''
VOLUME /var/s3
ARG S3FS_VERSION=v1.89
# Build s3fs-fuse from source, then remove the build-only packages to keep the image small.
RUN apk --update add bash fuse libcurl libxml2 libstdc++ libgcc alpine-sdk automake autoconf libxml2-dev fuse-dev curl-dev git \
    && git clone https://github.com/s3fs-fuse/s3fs-fuse.git \
    && cd s3fs-fuse \
    && git checkout tags/${S3FS_VERSION} \
    && ./autogen.sh && ./configure --prefix=/usr && make && make install && make clean \
    && rm -rf /var/cache/apk/* && apk del git automake autoconf
# Allow non-root users to pass the allow_other mount option.
RUN sed -i s/"#user_allow_other"/"user_allow_other"/g /etc/fuse.conf
COPY docker-entrypoint.sh /
CMD ["/docker-entrypoint.sh"]
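With the Dockerfile and docker-entrypoint.sh in the same directory, the image can be built and pushed with standard commands (registry and tag below are placeholders):

docker build -t <YOUR-REGISTRY>/kube-s3:1.13 .
docker push <YOUR-REGISTRY>/kube-s3:1.13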

Startup Script (docker-entrypoint.sh)

#!/bin/bash
set -euo pipefail

export S3_ACL=${S3_ACL:-private}
mkdir -p "${MNT_POINT}"

# s3fs accepts credentials via these environment variables or via a passwd file.
export AWSACCESSKEYID=${AWSACCESSKEYID:-$AWS_KEY}
export AWSSECRETACCESSKEY=${AWSSECRETACCESSKEY:-$AWS_SECRET_KEY}
echo "${AWS_KEY}:${AWS_SECRET_KEY}" > /etc/passwd-s3fs
chmod 0400 /etc/passwd-s3fs

# -f keeps s3fs in the foreground so the container does not exit;
# the doubled -d raises the debug level, and endpoint sets the bucket region.
/usr/bin/s3fs "${S3_BUCKET}" "${MNT_POINT}" -d -d -f -o endpoint="${S3_REGION}",allow_other,retries=5
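Before deploying to Kubernetes, the image can be smoke-tested locally; FUSE inside a container requires the /dev/fuse device and the SYS_ADMIN capability (all values below are placeholders):

docker run --rm -it \
  --device /dev/fuse --cap-add SYS_ADMIN \
  -e S3_BUCKET=<YOUR-S3-BUCKET-NAME> \
  -e AWS_KEY=<YOUR-AWS-ACCESS-KEY> \
  -e AWS_SECRET_KEY=<YOUR-AWS-SECRET-KEY> \
  -e S3_REGION=<YOUR-S3-REGION> \
  <YOUR-REGISTRY>/kube-s3:1.13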

Step 2: Create ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: s3-config
data:
  S3_BUCKET: <YOUR-S3-BUCKET-NAME>
  AWS_KEY: <YOUR-AWS-ACCESS-KEY>
  AWS_SECRET_KEY: <YOUR-AWS-SECRET-KEY>
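Note that ConfigMap values are stored in plain text. A common hardening step, not part of the original setup, is to move the credentials into a Secret with the same keys and reference it from the DaemonSet with secretRef instead of configMapRef; a sketch:

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret        # hypothetical name
type: Opaque
stringData:
  AWS_KEY: <YOUR-AWS-ACCESS-KEY>
  AWS_SECRET_KEY: <YOUR-AWS-SECRET-KEY>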

Step 3: Create DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: s3-provider
  labels:
    app: s3-provider
spec:
  selector:
    matchLabels:
      app: s3-provider
  template:
    metadata:
      labels:
        app: s3-provider
    spec:
      containers:
      - name: s3fuse
        image: freegroup/kube-s3:1.13   # replace with the image built in Step 1
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        lifecycle:
          preStop:
            exec:
              # Unmount cleanly when the pod stops so the host path is not left stale.
              command: ["fusermount", "-u", "/var/s3"]
        envFrom:
        - configMapRef:
            name: s3-config
        volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3
          # Bidirectional propagation makes the FUSE mount created inside the
          # container visible on the host path, and therefore to other pods.
          mountPropagation: Bidirectional
      volumes:
      - name: devfuse
        hostPath:
          path: /dev/fuse
      - name: mntdatas3fs
        hostPath:
          path: /mnt/data-s3-fs

Step 4: Deploy

Run kubectl apply -f configmap.yaml and kubectl apply -f daemonset.yaml. Verify that a pod runs on each node.
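The same steps as commands, using the label from the DaemonSet above:

kubectl apply -f configmap.yaml
kubectl apply -f daemonset.yaml
kubectl get pods -l app=s3-provider -o wide   # expect one pod per node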

Step 5: Verify Mount

Check the host directory /mnt/data-s3-fs for bucket contents, then exec into a pod to confirm the mount:

# On the node, the bucket contents should be visible on the host path:
ls /mnt/data-s3-fs

# Inside the DaemonSet pod:
kubectl exec -it s3-provider-xxxxx -- /bin/bash
ls /var/s3

Pods can now use the hostPath volume to access S3 data as a regular filesystem, optionally using subPath for isolation.
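As an illustration, a sketch of a consumer pod (names and the subPath directory are hypothetical) that reuses the node-level mount through the same hostPath:

apiVersion: v1
kind: Pod
metadata:
  name: s3-consumer            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: s3data
      mountPath: /data
      subPath: notebooks       # hypothetical sub-directory of the bucket
  volumes:
  - name: s3data
    hostPath:
      path: /mnt/data-s3-fs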

S3FS Drawbacks

Frequent user‑kernel context switches and data copies increase latency and reduce throughput compared to native filesystems.

Random writes require rewriting the entire object.

Metadata operations (e.g., directory listing) suffer from network latency.

Eventual consistency may expose intermediate data.

No atomic rename, hard links, or coordinated multi‑client access.

Overall, S3FS provides convenient file‑system semantics at the cost of performance and some POSIX features.

Conclusion

Using s3fs-fuse with a Kubernetes DaemonSet turns object storage into a userspace filesystem, eliminating per-node manual installation and mount steps and simplifying scaling. Although s3fs is not an officially supported S3 access method and carries a performance overhead, it fits scenarios such as this one, where file-directory access and git-based version control are required without changing the underlying storage.
