
Is Minio Turning Paid? 5 Free Distributed Storage Alternatives You Should Consider

The article explains Minio's recent licensing shift to AGPLv3, why it matters for SaaS and proprietary software vendors, and presents five open‑source distributed storage systems—SeaweedFS, Garage, Ceph, GlusterFS, and OpenStack Swift—detailing their licenses, deployment complexity, performance characteristics, and suitable use cases.

Java Companion

MinIO license changes

From October 2025, MinIO will stop providing free Docker images, remove advanced console features, and change the community edition license from Apache 2.0 to AGPLv3. The core product remains open source under AGPLv3, which requires any service offered over a network that incorporates the code to disclose its source or obtain a commercial license.

SeaweedFS

Fully open‑source (Apache 2.0), simple architecture, excellent small‑file performance, and full S3 compatibility.

Quick 5‑minute deployment

wget https://github.com/seaweedfs/seaweedfs/releases/download/3.55/linux_amd64.tar.gz
tar -xzf linux_amd64.tar.gz
./weed master -ip=localhost -port=9333 &
./weed volume -dir="./data" -max=100 -mserver="localhost:9333" -port=8080 &
# The S3 gateway used by the Java example below needs a filer; it listens on 8333 by default
./weed filer -master="localhost:9333" &
./weed s3 -filer="localhost:8888" &

Java client example

private AmazonS3 s3Client;
public void init() {
    AWSCredentials cred = new BasicAWSCredentials("your-access-key", "your-secret-key");
    s3Client = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:8333", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(cred))
        .withPathStyleAccessEnabled(true)
        .build();
}
public void uploadFile(String bucket, String key, File file) {
    if (!s3Client.doesBucketExistV2(bucket)) {
        s3Client.createBucket(bucket);
    }
    s3Client.putObject(bucket, key, file);
}

Typical scenarios: image/document storage, rapid prototyping, S3‑compatible migrations, resource‑constrained teams.

Garage

Apache 2.0‑licensed decentralized object store without a single point of failure.

Cluster deployment (docker‑compose)

version: '3.8'
services:
  garage1:
    image: dxflrs/garage:v0.9.0
    command: "garage server"
    environment:
      - GARAGE_NODE_NAME=node1
      - GARAGE_RPC_SECRET=my-secret-key
      - GARAGE_BIND_ADDR=0.0.0.0:3901
      - GARAGE_RPC_BIND_ADDR=0.0.0.0:3902
      - GARAGE_REPLICATION_MODE=3
    volumes:
      - ./data/garage1:/var/lib/garage
    ports:
      - "3900:3900"   # S3 API, used by the Java example below
      - "3901:3901"
      - "3902:3902"
  garage2:
    image: dxflrs/garage:v0.9.0
    command: "garage server"
    environment:
      - GARAGE_NODE_NAME=node2
      - GARAGE_RPC_SECRET=my-secret-key
      - GARAGE_BIND_ADDR=0.0.0.0:3901
      - GARAGE_RPC_BIND_ADDR=0.0.0.0:3902
      - GARAGE_SEED=garage1:3902
    volumes:
      - ./data/garage2:/var/lib/garage
  garage3:
    image: dxflrs/garage:v0.9.0
    command: "garage server"
    environment:
      - GARAGE_NODE_NAME=node3
      - GARAGE_RPC_SECRET=my-secret-key
      - GARAGE_BIND_ADDR=0.0.0.0:3901
      - GARAGE_RPC_BIND_ADDR=0.0.0.0:3902
      - GARAGE_SEED=garage1:3902
    volumes:
      - ./data/garage3:/var/lib/garage

Java integration

private AmazonS3 garageClient;
public void initGarageConnection() {
    AWSCredentials cred = new BasicAWSCredentials("GK...", "...");
    garageClient = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:3900", "garage"))
        .withCredentials(new AWSStaticCredentialsProvider(cred))
        .withPathStyleAccessEnabled(true)
        .build();
}
public void createBucketWithPolicy(String bucket) {
    garageClient.createBucket(bucket);
    String policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::%s/*\"}]}".formatted(bucket);
    garageClient.setBucketPolicy(bucket, policy);
}
public String uploadAndGenerateUrl(String bucket, String key, InputStream is) {
    ObjectMetadata meta = new ObjectMetadata();
    meta.setContentType("application/octet-stream");
    garageClient.putObject(bucket, key, is, meta);
    java.util.Date exp = new java.util.Date();
    exp.setTime(exp.getTime() + 3600_000);
    GeneratePresignedUrlRequest req = new GeneratePresignedUrlRequest(bucket, key)
        .withMethod(HttpMethod.GET)
        .withExpiration(exp);
    return garageClient.generatePresignedUrl(req).toString();
}

Typical scenarios: decentralized applications, lightweight object storage, research or education projects.

Ceph

LGPL‑licensed, community‑driven with Red Hat backing. Provides object, block, and file storage; S3 compatibility via RADOSGW.

Java client (S3 compatible)

private AmazonS3 cephClient;
public void initCephConnection() {
    AWSCredentials cred = new BasicAWSCredentials(System.getenv("CEPH_ACCESS_KEY"), System.getenv("CEPH_SECRET_KEY"));
    cephClient = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(cred))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://ceph-gateway.example.com:7480", ""))
        .withPathStyleAccessEnabled(true)
        .build();
}
public void uploadLargeFile(String bucket, String key, File file) throws Exception {
    InitiateMultipartUploadRequest init = new InitiateMultipartUploadRequest(bucket, key);
    InitiateMultipartUploadResult resp = cephClient.initiateMultipartUpload(init);
    String uploadId = resp.getUploadId();
    long partSize = 100L * 1024 * 1024; // 100 MiB
    List<PartETag> partETags = new ArrayList<>();
    try (FileInputStream fis = new FileInputStream(file)) {
        long fileSize = file.length();
        long position = 0;
        int partNumber = 1;
        while (position < fileSize) {
            long curSize = Math.min(partSize, fileSize - position);
            UploadPartRequest partReq = new UploadPartRequest()
                .withBucketName(bucket)
                .withKey(key)
                .withUploadId(uploadId)
                .withPartNumber(partNumber)
                .withInputStream(fis)
                .withPartSize(curSize);
            partETags.add(cephClient.uploadPart(partReq).getPartETag());
            position += curSize;
            partNumber++;
        }
        cephClient.completeMultipartUpload(new CompleteMultipartUploadRequest(bucket, key, uploadId, partETags));
    } catch (Exception e) {
        // Abort so RADOSGW reclaims the space held by the incomplete upload
        cephClient.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, key, uploadId));
        throw e;
    }
}
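The loop above walks the file in fixed-size windows. The part arithmetic can be isolated and checked on its own; a minimal sketch mirroring the loop's `Math.min` logic:

```java
public class PartMath {
    // Number of parts needed to cover fileSize using parts of at most partSize bytes
    public static int partCount(long fileSize, long partSize) {
        if (fileSize == 0) return 0;
        return (int) ((fileSize + partSize - 1) / partSize); // ceiling division
    }

    // Size of the 1-based part `partNumber`, mirroring Math.min(partSize, fileSize - position)
    public static long sizeOfPart(long fileSize, long partSize, int partNumber) {
        long position = (long) (partNumber - 1) * partSize;
        return Math.min(partSize, fileSize - position);
    }
}
```

For a 250 MiB file with 100 MiB parts this yields three parts, the last one 50 MiB, matching what the upload loop produces.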

Typical scenarios: large‑scale enterprise storage, workloads requiring object + block + file interfaces, teams with strong operations capability.

GlusterFS

GPLv3‑licensed horizontally scalable POSIX file system with a metadata‑free architecture.

Quick deployment script (3‑node)

#!/bin/bash
# Probe peers
gluster peer probe node2
gluster peer probe node3
# Dispersed volume (erasure-coded: 2 data + 1 redundancy brick)
gluster volume create gv0 disperse 3 redundancy 1 node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
# Replicated volume (full copies)
gluster volume create gv1 replica 3 node1:/data/brick2 node2:/data/brick2 node3:/data/brick2
# Start volumes
gluster volume start gv0
gluster volume start gv1
# Mount on client
mount -t glusterfs node1:/gv0 /mnt/glusterfs

Java usage (standard file API)

private Path mountPoint;
public GlusterFSExample(String mountPath) {
    this.mountPoint = Paths.get(mountPath);
    if (!Files.exists(mountPoint)) {
        throw new IllegalArgumentException("GlusterFS mount point does not exist: " + mountPath);
    }
}
public void writeFile(String name, String content) throws IOException {
    Path file = mountPoint.resolve(name);
    Files.createDirectories(file.getParent());
    Files.write(file, content.getBytes(StandardCharsets.UTF_8), StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
}
public String readFile(String name) throws IOException {
    Path file = mountPoint.resolve(name);
    return Files.exists(file) ? new String(Files.readAllBytes(file), StandardCharsets.UTF_8) : null;
}
public List<String> listFiles(String dir) throws IOException {
    Path d = mountPoint.resolve(dir);
    if (Files.isDirectory(d)) {
        try (Stream<Path> s = Files.list(d)) {
            return s.filter(Files::isRegularFile)
                    .map(p -> p.getFileName().toString())
                    .collect(Collectors.toList());
        }
    }
    return Collections.emptyList();
}
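Because GlusterFS is a mounted POSIX filesystem, every client sees a file as soon as it is created, including half-written ones. A common mitigation is to write to a temporary file and rename it into place, since a rename within one directory is atomic on POSIX filesystems. A sketch using only the standard library:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AtomicWriter {
    // Write content to a temp file in the target's directory, then atomically
    // rename it into place so other mount clients never see a partial file.
    public static void writeAtomically(Path target, String content) throws IOException {
        Files.createDirectories(target.getParent());
        Path tmp = Files.createTempFile(target.getParent(), ".tmp-", null);
        try {
            Files.write(tmp, content.getBytes(StandardCharsets.UTF_8));
            Files.move(tmp, target,
                StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
        } finally {
            Files.deleteIfExists(tmp); // no-op when the move succeeded
        }
    }
}
```

The temp file must live in the same directory as the target: a rename across directories (or bricks) is not guaranteed atomic.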

Typical scenarios: applications requiring a POSIX interface, media processing pipelines, legacy systems built on file APIs.

OpenStack Swift

Apache 2.0‑licensed object storage component of OpenStack, governed by the OpenStack Foundation.

Cluster configuration (simplified)

# swift.conf – core settings
[swift-hash]
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme

[storage-policy:0]
name = Policy-0
default = yes

# Ring creation: part_power=10, replicas=3, min_part_hours=24
swift-ring-builder account.builder create 10 3 24
swift-ring-builder container.builder create 10 3 24
swift-ring-builder object.builder create 10 3 24

# Add a storage node
swift-ring-builder object.builder add r1z1-127.0.0.1:6010/sdb1 100
swift-ring-builder object.builder rebalance

Java client (jclouds)

private BlobStore blobStore;
public void initSwiftConnection() {
    Properties overrides = new Properties();
    overrides.setProperty("jclouds.keystone.version", "3");
    BlobStoreContext context = ContextBuilder.newBuilder("openstack-swift")
        .endpoint("http://swift.example.com:5000/v3")
        .credentials("project:username", "password")
        .overrides(overrides)
        .buildView(BlobStoreContext.class);
    // Region selection happens per request via a Location, not at build time
    blobStore = context.getBlobStore();
}
public String uploadToSwift(String container, String object, InputStream data, long size) {
    if (!blobStore.containerExists(container)) {
        blobStore.createContainerInLocation(null, container);
    }
    Blob blob = blobStore.blobBuilder(object)
        .payload(data)
        .contentLength(size)
        .contentType("application/octet-stream")
        .build();
    String etag = blobStore.putBlob(container, blob);
    // Return the ETag; a shareable link would be a TempURL, whose temp_url_sig
    // is an HMAC-SHA1 signature computed with the account's temp-URL key.
    return etag;
}
public void uploadLargeObject(String container, String object, List<File> segments) {
    List<String> segPaths = new ArrayList<>();
    for (int i = 0; i < segments.size(); i++) {
        String segName = String.format("%s/%08d", object, i);
        try (InputStream is = new FileInputStream(segments.get(i))) {
            uploadToSwift(container, segName, is, segments.get(i).length());
            segPaths.add(String.format("/%s/%s", container, segName));
        } catch (IOException e) {
            throw new RuntimeException("Segment upload failed", e);
        }
    }
    // Note: a real Swift static large object (SLO) manifest is JSON uploaded with
    // ?multipart-manifest=put; the plain-text listing here is a simplification.
    String manifest = String.join("\n", segPaths);
    try (InputStream is = new ByteArrayInputStream(manifest.getBytes())) {
        blobStore.putBlob(container, blobStore.blobBuilder(object)
            .payload(is)
            .contentLength(manifest.length())
            .contentType("text/plain")
            .build());
    } catch (IOException e) {
        throw new RuntimeException("Manifest creation failed", e);
    }
}
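The Swift examples gloss over TempURL signing: a shareable link needs a temp_url_sig, which is an HMAC-SHA1 over the request method, expiry timestamp, and object path, keyed by the account's X-Account-Meta-Temp-URL-Key. A self-contained sketch (account name, path, and key are hypothetical):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class TempUrlSigner {
    // Swift TempURL: sig = hex(HMAC-SHA1(key, "METHOD\n<expires>\n<path>"))
    public static String sign(String method, long expires, String path, String tempUrlKey) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(tempUrlKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            byte[] raw = mac.doFinal(
                (method + "\n" + expires + "\n" + path).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : raw) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException("HMAC-SHA1 unavailable", e);
        }
    }
}
```

Usage: append `?temp_url_sig=<sig>&temp_url_expires=<expires>` to the object URL; the proxy recomputes the HMAC with the stored key and rejects mismatches or expired links.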

Typical scenarios: OpenStack cloud deployments, enterprise applications demanding high durability, multi‑region replication, teams already operating OpenStack infrastructure.

Comparative overview

License: SeaweedFS & Garage – Apache 2.0; Ceph – LGPL; GlusterFS – GPLv3; OpenStack Swift – Apache 2.0.

Deployment complexity: SeaweedFS – very simple; Garage – simple; Ceph – complex; GlusterFS – moderate; Swift – moderately complex.

S3 compatibility: SeaweedFS, Garage – full; Ceph – via RADOSGW; GlusterFS – third-party; Swift – native.

File-system support: SeaweedFS – limited; Garage – none; Ceph – CephFS; GlusterFS – POSIX; Swift – none.

Scale suitability: SeaweedFS & Garage – small-to-mid; Ceph – very large; GlusterFS – mid-to-large; Swift – large.

Small-file performance: SeaweedFS ★★★★★; Garage ★★★☆☆; Ceph ★★★☆☆; GlusterFS ★★☆☆☆; Swift ★★★★☆.

Large-file performance: SeaweedFS ★★★☆☆; Garage ★★★★☆; Ceph ★★★★★; GlusterFS ★★★★☆; Swift ★★★★★.

Ops overhead: SeaweedFS & Garage – low; Ceph – high; GlusterFS – medium; Swift – medium-high.
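One practical upshot of the S3-compatibility row: for SeaweedFS, Garage, and Ceph the client code in this article is essentially identical, and swapping backends mostly means changing the endpoint (plus credentials and the path-style flag). A small sketch collecting the hypothetical local endpoints used in the examples above:

```java
import java.util.Map;

public class S3Endpoints {
    // Endpoints from this article's examples; hostnames/ports are illustrative.
    // With an S3-compatible store, only this configuration changes per backend.
    static final Map<String, String> ENDPOINTS = Map.of(
        "seaweedfs", "http://localhost:8333",
        "garage",    "http://localhost:3900",
        "ceph",      "http://ceph-gateway.example.com:7480");

    public static String endpointFor(String backend) {
        String ep = ENDPOINTS.get(backend);
        if (ep == null) throw new IllegalArgumentException("No S3 endpoint for: " + backend);
        return ep;
    }
}
```

GlusterFS sits outside this table entry: it is consumed through the mounted filesystem, not an S3 client.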

Tags: open-source, MinIO, distributed storage, Ceph, GlusterFS, SeaweedFS, OpenStack Swift
Written by Java Companion, a highly professional Java public account.