
Deep Dive into SeaweedFS 3.46: Source Code Walkthrough of Its Distributed Architecture

This article walks through SeaweedFS 3.46’s source code, showing how to launch Master, Volume, and Filer servers, how Raft leader election is wired, and the exact request‑response flow for uploading and downloading files, complete with command‑line examples and code snippets.

Linux Kernel Journey

Overview

SeaweedFS 3.46 is a distributed file system composed of three core components: Master Server, Volume Server, and Filer Server. The source code shows how each component is started, how Raft‑based leader election is performed, and how file upload and download flow through the system.

Starting the Master Server

Launch command:

weed master -ip=127.0.0.1 -ip.bind=0.0.0.0

Flags are defined in weed/command/master.go. The default HTTP port is 9333; the gRPC port is the HTTP port plus 10000, i.e. 19333. A three‑node cluster is started by giving each node the full comma‑separated -peers list:

weed master -ip=127.0.0.1 -ip.bind=0.0.0.0 -port=9333 -peers="127.0.0.1:9333,127.0.0.1:9334,127.0.0.1:9335"
weed master -ip=127.0.0.1 -ip.bind=0.0.0.0 -port=9334 -peers="127.0.0.1:9333,127.0.0.1:9334,127.0.0.1:9335"
weed master -ip=127.0.0.1 -ip.bind=0.0.0.0 -port=9335 -peers="127.0.0.1:9333,127.0.0.1:9334,127.0.0.1:9335"

During start‑up the Master creates a weed_server.RaftServer via weed_server.NewRaftServer(), which uses the github.com/seaweedfs/raft library (v1.1.0) to build a Raft node and begin leader election. After a leader is elected the server registers HTTP handlers for cluster status:

r.HandleFunc("/cluster/status", raftServer.StatusHandler).Methods("GET")
r.HandleFunc("/cluster/healthz", raftServer.HealthzHandler).Methods("GET", "HEAD")
if *masterOption.raftHashicorp {
    r.HandleFunc("/raft/stats", raftServer.StatsRaftHandler).Methods("GET")
}

Example queries:

$ curl http://127.0.0.1:9333/cluster/status
{"IsLeader":true,"Leader":"127.0.0.1:9333","Peers":["127.0.0.1:9335","127.0.0.1:9334"]}
$ curl http://127.0.0.1:9334/cluster/status
{"Leader":"127.0.0.1:9333","Peers":["127.0.0.1:9335","127.0.0.1:9333"]}

Starting the Volume Server

Launch command:

weed volume -mserver="127.0.0.1:9333" -dir=data -ip=127.0.0.1 -ip.bind=0.0.0.0

Flags are defined in weed/command/volume.go. HTTP listens on port 8080 by default, gRPC on 18080 (again HTTP + 10000). The -mserver flag points to one or more Master addresses; -dir selects the data directory.

After start‑up the Volume creates a gRPC client to the Master and runs a heartbeat loop implemented in weed/server/volume_grpc_client_to_master.go (method VolumeServer.doHeartbeat). The loop sends a bidirectional SendHeartbeat stream:

rpc SendHeartbeat (stream Heartbeat) returns (stream HeartbeatResponse) {}

Inside the goroutine the server reads incoming Heartbeat messages, updates its volume‑size limit, switches master address if a new leader is reported, and reports newly created or deleted volumes/EC shards via stream.Send(). The select block listens on four internal channels (NewVolumesChan, NewEcShardsChan, DeletedVolumesChan, DeletedEcShardsChan) and periodically sends a full heartbeat.

Starting the Filer Server

Launch command:

weed filer -s3 -master="127.0.0.1:9333" -ip=127.0.0.1 -ip.bind=0.0.0.0

Flags are defined in weed/command/filer.go. HTTP defaults to 8888, gRPC to 18888. The -s3 flag enables an S3‑compatible gateway on port 8333. Metadata is stored in a pluggable database – default leveldb, replaceable with SQLite, MySQL, Etcd, etc. (see the Filer‑Stores wiki).

The Filer acts as a file manager that forwards API calls to the underlying Volume Servers and also exposes POSIX, WebDAV, and S3 interfaces.

File Upload Process

Uploading via the Filer API is a simple HTTP POST:

$ curl -F "file=@test.txt" -X POST "http://127.0.0.1:8888"
{"name":"test.txt","size":14}

The handler is PostHandler in weed/server/filer_server_handlers_write.go. It distinguishes a directory‑creation request (empty Content‑Type and URL ending with /) from a file‑upload request, delegating the latter to fs.autoChunk. autoChunk decides between fs.doPostAutoChunk (POST) and fs.doPutAutoChunk (PUT). The core upload logic resides in fs.uploadReaderToChunks, which:

- reads the request body with a limited reader based on chunkSize,
- reuses a buffered‑object pool to limit memory usage,
- spawns a goroutine per chunk that converts the raw bytes to a SeaweedFS FileChunk via fs.dataToChunk,
- collects all chunks, sorts them by Offset, and returns them.

Each chunk is uploaded to a Volume Server using a URL assigned by the Master. Assignment occurs in fs.assignNewFileInfo, which calls the Master’s Assign RPC and receives a fid and an upload URL such as http://127.0.0.1:8080/14,1f343c431d. The fid 14,1f343c431d identifies the target volume (id 14) and the file key.
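As an illustration, a fid can be split like this. This is a sketch, not SeaweedFS's own parser: the real format further decodes the hex key into a needle id plus cookie, which is kept as an opaque string here.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseFid splits a fid like "14,1f343c431d" into the numeric volume id
// and the hex-encoded file key.
func parseFid(fid string) (volumeID uint32, key string, err error) {
	parts := strings.SplitN(fid, ",", 2)
	if len(parts) != 2 {
		return 0, "", fmt.Errorf("malformed fid: %q", fid)
	}
	id, err := strconv.ParseUint(parts[0], 10, 32)
	if err != nil {
		return 0, "", err
	}
	return uint32(id), parts[1], nil
}

func main() {
	vid, key, err := parseFid("14,1f343c431d")
	if err != nil {
		panic(err)
	}
	fmt.Println(vid, key)
}
```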

The file is split according to the maxMB flag (default 4 MiB). A 100 MiB file is broken into 25 chunks, each receiving its own fid.
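The 25-chunk figure is plain ceiling division, sketched here for clarity (chunkCount is an illustrative helper, not a SeaweedFS function):

```go
package main

import "fmt"

// chunkCount computes how many chunkSize-sized pieces a file needs:
// ceil(fileSize / chunkSize), matching the 100 MiB / 4 MiB = 25 example.
func chunkCount(fileSize, chunkSize int64) int64 {
	return (fileSize + chunkSize - 1) / chunkSize
}

func main() {
	const MiB = int64(1) << 20
	fmt.Println(chunkCount(100*MiB, 4*MiB))
}
```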

File Download Process

Downloading mirrors the upload flow. A GET request to the Filer returns the file content; adding ?metadata=true returns the stored metadata JSON.

$ curl http://127.0.0.1:8888/test.txt
hello test.txt

$ curl http://127.0.0.1:8888/test.txt?metadata=true
{..."chunks":[{"file_id":"14,1f343c431d","size":14,...}]}

The handler GetOrHeadHandler (in weed/server/filer_server_handlers_read.go) checks ETag, handles range requests, and finally calls filer.StreamContentWithThrottler. This function:

- converts each chunk’s fid to a list of Volume Server URLs via the Master’s LookupFileId RPC (with exponential back‑off),
- streams the chunk data from the selected URL(s) using retriedStreamFetchChunkData,
- writes the data to the HTTP response while respecting a configurable download‑rate limit.

If the requested range exceeds the available data, zero‑filled bytes are written to preserve the expected length.

References

SeaweedFS 3.46 source tree – https://github.com/seaweedfs/seaweedfs/tree/3.46
SeaweedFS Raft library v1.1.0 – https://github.com/seaweedfs/raft/tree/v1.1.0
Filer‑Stores wiki – https://github.com/seaweedfs/seaweedfs/wiki/Filer-Stores
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
