Backend Development · 11 min read

Implementing Fast File Upload: Instant Transfer, Chunked Upload, and Resume Support in Java Backend

This article explains various backend file upload techniques—including instant (秒传) transfer, chunked (分片) upload, and breakpoint resume—detailing their principles, Redis-based state tracking, and providing Java implementations using RandomAccessFile, MappedByteBuffer, and a slice upload template.


File upload is a common backend task: for small files a simple byte-stream upload suffices, but large files need more sophisticated handling so that an interrupted transfer does not have to restart from zero. This article covers three techniques: instant transfer (秒传), chunked upload (分片上传), and breakpoint resume (断点续传).

Instant Transfer relies on MD5 checksum verification. If the server already stores a file with the same MD5, it simply returns the stored file's address without re-uploading a single byte. Changing the file's MD5 (e.g., by modifying its content) defeats instant transfer, since the hash no longer matches.

The core logic uses Redis to store upload status, keyed by the file's MD5, with a flag indicating whether the upload has completed.
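The idea can be sketched in a few lines. This is a minimal illustration, not the article's actual code: a `HashMap` stands in for the Redis lookup, and the class and method names (`InstantTransferCheck`, `tryInstantTransfer`) are invented for the example.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

public class InstantTransferCheck {

    // md5 -> stored file path; in the article's design this lookup would hit Redis
    private final Map<String, String> md5Index = new HashMap<>();

    /** Hex-encoded MD5 of the file content, as computed on upload. */
    public static String md5Hex(byte[] content) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(content);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is a mandatory JDK algorithm", e);
        }
    }

    /** Record a finished upload so later uploads of the same content short-circuit. */
    public void recordCompletedUpload(String md5, String storedPath) {
        md5Index.put(md5, storedPath);
    }

    /** Returns the existing address if the file is already on the server, else null. */
    public String tryInstantTransfer(String md5) {
        return md5Index.get(md5);
    }
}
```

If `tryInstantTransfer` returns a path, the server answers the upload request immediately with that address; only on a miss does the real chunked upload begin.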

Chunked Upload splits a large file into equal‑size parts (Parts) and uploads each separately. After all parts are uploaded, the server merges them back into the original file. This method is suitable for large files and unreliable network conditions.
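The arithmetic both sides must agree on is small but easy to get wrong, particularly the short final chunk. A sketch (class and method names are illustrative; `offsetOf` mirrors the `chunkSize * param.getChunk()` formula used by the strategies below):

```java
public class ChunkMath {

    /** Number of chunks needed to cover the file (ceiling division). */
    public static int chunkCount(long fileSize, long chunkSize) {
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }

    /** Byte offset where a given chunk starts inside the assembled file. */
    public static long offsetOf(int chunkIndex, long chunkSize) {
        return chunkIndex * chunkSize;
    }

    /** Actual size of a chunk; only the last one may be shorter than chunkSize. */
    public static long sizeOf(int chunkIndex, long fileSize, long chunkSize) {
        long remaining = fileSize - offsetOf(chunkIndex, chunkSize);
        return Math.min(chunkSize, remaining);
    }
}
```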

Breakpoint Resume extends chunked upload by allowing interrupted uploads to continue from the last successful part. The server records progress in a .conf file and synchronizes state with Redis.
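One common way to implement such a `.conf` progress file is one status byte per chunk, flipped to a marker value (here `Byte.MAX_VALUE`, i.e. 127) when that chunk lands. The layout and marker below are assumptions for illustration; the article does not spell out its file format:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class UploadProgressFile {

    private static final byte DONE = Byte.MAX_VALUE; // assumed "chunk finished" marker
    private final Path confPath;
    private final int totalChunks;

    public UploadProgressFile(Path confPath, int totalChunks) throws IOException {
        this.confPath = confPath;
        this.totalChunks = totalChunks;
        if (!Files.exists(confPath)) {
            try (RandomAccessFile raf = new RandomAccessFile(confPath.toFile(), "rw")) {
                raf.setLength(totalChunks); // zero-filled: no chunk uploaded yet
            }
        }
    }

    /** Mark one chunk finished; returns true once every chunk is done. */
    public boolean markDone(int chunkIndex) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(confPath.toFile(), "rw")) {
            raf.seek(chunkIndex);
            raf.write(DONE);
        }
        return isComplete();
    }

    /** A restarted client re-sends only the chunks whose status byte is not DONE. */
    public boolean isComplete() throws IOException {
        byte[] status = Files.readAllBytes(confPath);
        for (byte b : status) {
            if (b != DONE) return false;
        }
        return status.length == totalChunks;
    }
}
```

Mirroring the same per-chunk flags into Redis, as the article describes, lets the server answer "which chunks are missing?" without touching the disk file.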

The backend implements two write strategies behind a common template:

```java
@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile accessTmpFile = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            accessTmpFile = new RandomAccessFile(tmpFile, "rw");
            // defaultChunkSize is configured in MB; a client-supplied size wins if present
            long chunkSize = Objects.isNull(param.getChunkSize())
                ? defaultChunkSize * 1024 * 1024
                : param.getChunkSize();
            // Seek to this chunk's offset and write it directly into the temp file
            long offset = chunkSize * param.getChunk();
            accessTmpFile.seek(offset);
            accessTmpFile.write(param.getFile().getBytes());
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessTmpFile);
        }
        return false;
    }
}
```

```java
@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile tempRaf = null;
        FileChannel fileChannel = null;
        MappedByteBuffer mappedByteBuffer = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            tempRaf = new RandomAccessFile(tmpFile, "rw");
            fileChannel = tempRaf.getChannel();
            long chunkSize = Objects.isNull(param.getChunkSize())
                ? defaultChunkSize * 1024 * 1024
                : param.getChunkSize();
            long offset = chunkSize * param.getChunk();
            byte[] fileData = param.getFile().getBytes();
            // Map only this chunk's region of the temp file and write through the buffer
            mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
            mappedByteBuffer.put(fileData);
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            // The mapped buffer must be released explicitly, or the temp file can stay locked
            FileUtil.freedMappedByteBuffer(mappedByteBuffer);
            FileUtil.close(fileChannel);
            FileUtil.close(tempRaf);
        }
        return false;
    }
}
```

A shared abstract template SliceUploadTemplate provides common utilities such as temporary file creation, progress checking, Redis state persistence, and final file renaming.
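The template-method shape of such a base class can be sketched as follows. This is a guess at the structure, not the article's `SliceUploadTemplate` itself: the class name `SliceUploadSketch` and the hook names `writeChunk`, `recordProgress`, and `renameToFinalFile` are invented for illustration.

```java
public abstract class SliceUploadSketch {

    /**
     * Template method: write one chunk via the strategy-specific hook, then
     * update shared progress state; rename the temp file once all chunks land.
     */
    public final boolean handleChunk(int chunk, byte[] data) {
        writeChunk(chunk, data);                  // RandomAccessFile, mmap, ...
        boolean complete = recordProgress(chunk); // .conf file + Redis in the article's design
        if (complete) {
            renameToFinalFile();                  // drop the temporary suffix
        }
        return complete;
    }

    protected abstract void writeChunk(int chunk, byte[] data);

    /** Returns true when every chunk of the file has been received. */
    protected abstract boolean recordProgress(int chunk);

    protected abstract void renameToFinalFile();
}
```

Each concrete strategy then only supplies the write mechanics, which is exactly the division of labor the two classes above exhibit.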

The article concludes that successful chunked uploads require consistent chunk size and numbering between frontend and backend, and that a dedicated file server (e.g., FastDFS, HDFS) is ideal. For simpler upload/download needs, Alibaba Cloud OSS can be used, though it is object storage and less suited for frequent deletions or modifications.

Tags: backend, Java, Redis, File Upload, Chunked Upload, Resume Upload
Written by Selected Java Interview Questions, a professional Java tech channel sharing common knowledge to help developers fill gaps.