
Unlock RustFS: High‑Performance Distributed Storage with Docker & SpringBoot

This guide introduces RustFS, a high‑performance Rust‑based distributed object storage system, covering its key features, Docker installation, console usage, and step‑by‑step integration with SpringBoot for file upload and deletion, including code snippets and configuration details.


Introduction

RustFS is a high‑performance distributed object storage system written in Rust. It quickly gained over 6k stars on GitHub, is simple to use, offers AWS S3 compatibility, and is released under the Apache 2.0 license.

Key Features

High performance: built with Rust for fast response.

Distributed architecture: scalable and fault‑tolerant design.

AWS S3 compatibility: can be managed with the AWS S3 SDK.

Data lake support: optimized for big data and AI workloads.

Open source: Apache 2.0 license encourages community contributions.

User‑friendly: visual management console.

Installation

Using Docker to install RustFS is very convenient.

Pull the RustFS Docker image:

docker pull rustfs/rustfs

Then run the container (replace the volume paths as needed):

docker run -p 9000:9000 --name rustfs \
  -e RUSTFS_ACCESS_KEY=rustfsadmin \
  -e RUSTFS_SECRET_KEY=rustfsadmin \
  -v /mydata/rustfs/data:/data \
  -v /etc/localtime:/etc/localtime \
  -d rustfs/rustfs

After the container starts, access the management console at http://YOUR_HOST:9000 and log in with the default credentials rustfsadmin:rustfsadmin.

Console Usage

The RustFS console is intuitive and user‑friendly.

Select the File Browser feature and click the Create Bucket button to create a bucket.

Click the Configure button of a bucket to modify its access policy.

Inside a bucket, click Upload File to upload one or more files.

Select a file and click Preview to view it.

Use the Access Key feature to create temporary keys with custom policies.

Use the User feature to manage users and assign policies.

Use the Performance feature to view server statistics.

SpringBoot Integration

Below we create a SpringBoot application to upload and delete files on RustFS.

Add the AWS SDK dependency to pom.xml:

<!--AWS S3 Java SDK dependency-->
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>s3</artifactId>
  <version>${aws-s3-sdk.version}</version>
</dependency>

Configure connection information in application.yml:

rustfs:
  endpoint: http://192.168.3.101:9000
  bucketName: simple
  accessKey: rustfsadmin
  secretKey: rustfsadmin

Create a configuration class to build the S3 client:

@Configuration
public class RustFSConfig {
    @Value("${rustfs.endpoint}")
    private String ENDPOINT;
    @Value("${rustfs.accessKey}")
    private String ACCESS_KEY;
    @Value("${rustfs.secretKey}")
    private String SECRET_KEY;

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                .endpointOverride(URI.create(ENDPOINT)) // RustFS address
                .region(Region.US_EAST_1) // fixed, RustFS does not validate region
                .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(ACCESS_KEY, SECRET_KEY)))
                .forcePathStyle(true) // important for RustFS
                .build();
    }
}

Implement a controller for upload and delete operations:

@Slf4j
@Controller
@Tag(name = "RustFSController", description = "RustFS object storage management")
@RequestMapping("/rustfs")
public class RustFSController {
    @Autowired
    private S3Client s3Client;
    @Value("${rustfs.bucketName}")
    private String BUCKET_NAME;
    @Value("${rustfs.endpoint}")
    private String ENDPOINT;

    @Operation(summary = "File upload")
    @PostMapping(value = "/upload", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
    @ResponseBody
    public CommonResult upload(@RequestPart("file") MultipartFile file) {
        if (!bucketExists(BUCKET_NAME)) {
            s3Client.createBucket(CreateBucketRequest.builder().bucket(BUCKET_NAME).build());
            String policy = JSONUtil.toJsonStr(createBucketPolicyConfigDto(BUCKET_NAME));
            PutBucketPolicyRequest policyReq = PutBucketPolicyRequest.builder()
                    .bucket(BUCKET_NAME)
                    .policy(policy)
                    .build();
            s3Client.putBucketPolicy(policyReq);
        }
        try {
            s3Client.putObject(PutObjectRequest.builder()
                    .bucket(BUCKET_NAME)
                    .key(file.getOriginalFilename())
                    .contentType(file.getContentType())
                    .build(), RequestBody.fromInputStream(file.getInputStream(), file.getSize()));
            RustFSUploadResult result = new RustFSUploadResult();
            result.setName(file.getOriginalFilename());
            result.setUrl(ENDPOINT + "/" + BUCKET_NAME + "/" + file.getOriginalFilename());
            return CommonResult.success(result);
        } catch (IOException e) {
            log.error("File upload failed", e);
        }
        return CommonResult.failed();
    }

    @Operation(summary = "File delete")
    @PostMapping("/delete")
    @ResponseBody
    public CommonResult delete(@RequestParam("objectName") String objectName) {
        s3Client.deleteObject(DeleteObjectRequest.builder().bucket(BUCKET_NAME).key(objectName).build());
        return CommonResult.success(null);
    }

    private boolean bucketExists(String bucketName) {
        try {
            s3Client.headBucket(r -> r.bucket(bucketName));
            return true;
        } catch (NoSuchBucketException e) {
            return false;
        }
    }

    private BucketPolicyConfigDto createBucketPolicyConfigDto(String bucketName) {
        BucketPolicyConfigDto.Statement statement = BucketPolicyConfigDto.Statement.builder()
                .Effect("Allow")
                .Principal(BucketPolicyConfigDto.Principal.builder().AWS(new String[]{"*"}).build())
                .Action(new String[]{"s3:GetObject"})
                .Resource(new String[]{"arn:aws:s3:::" + bucketName + "/*"})
                .build();
        return BucketPolicyConfigDto.builder()
                .Version("2012-10-17")
                .Statement(CollUtil.toList(statement))
                .build();
    }
}
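The controller above uses the original filename directly as the object key and concatenates it unencoded into the result URL, which risks key collisions and broken links for names containing spaces or non‑ASCII characters. A minimal sketch of one way to harden this (the helper names `buildObjectKey` and `buildPublicUrl` are hypothetical, not part of the original code):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class ObjectKeyUtil {
    // Generate a collision-free key: random UUID prefix plus a sanitized filename.
    public static String buildObjectKey(String originalFilename) {
        String safeName = originalFilename == null ? "file"
                : originalFilename.replaceAll("[^\\w.\\-]", "_");
        return UUID.randomUUID() + "-" + safeName;
    }

    // Build the public download URL, percent-encoding the key so spaces and
    // non-ASCII characters survive in a browser address bar.
    public static String buildPublicUrl(String endpoint, String bucket, String key) {
        String encodedKey = URLEncoder.encode(key, StandardCharsets.UTF_8)
                .replace("+", "%20"); // URLEncoder emits '+' for spaces; URLs expect %20
        return endpoint + "/" + bucket + "/" + encodedKey;
    }
}
```

In the upload method you would then pass `buildObjectKey(file.getOriginalFilename())` as the `key(...)` argument and use `buildPublicUrl(...)` when populating the result URL.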

The generated bucket policy JSON looks like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["*"]},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::simple/*"]
    }
  ]
}
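If you prefer not to maintain a DTO class just for this policy, the same read‑only policy JSON can be assembled as a plain string. This is an alternative to the article's DTO approach, not part of the original code:

```java
public class BucketPolicyUtil {
    // Build a public-read bucket policy as raw JSON, equivalent to the
    // DTO-based BucketPolicyConfigDto version: anonymous principals may
    // only call s3:GetObject on objects inside the given bucket.
    public static String publicReadPolicy(String bucketName) {
        return "{"
                + "\"Version\":\"2012-10-17\","
                + "\"Statement\":[{"
                + "\"Effect\":\"Allow\","
                + "\"Principal\":{\"AWS\":[\"*\"]},"
                + "\"Action\":[\"s3:GetObject\"],"
                + "\"Resource\":[\"arn:aws:s3:::" + bucketName + "/*\"]"
                + "}]}";
    }
}
```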

With Swagger enabled, you can test the File upload and File delete endpoints directly from the API documentation UI.

Summary

This article demonstrated RustFS’s console and its integration with SpringBoot, showing a console that compares favorably with MinIO’s and offering a practical, S3‑compatible solution for high‑performance distributed file storage.

Tags: Docker, SpringBoot, distributed storage, S3 Compatibility, RustFS
Written by

macrozheng

Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.
