
Implementing Chunked and Resumable File Upload with Fast Upload (秒传) in Java

This article explains how to implement fast (MD5‑based) upload, chunked upload, and resumable (break‑point) upload for large files in Java, detailing the Redis‑based status tracking, server‑side file handling with RandomAccessFile or MappedByteBuffer, and the required front‑end and back‑end coordination.


Uploading large files requires chunked and resumable strategies so that an interrupted transfer does not have to restart from the beginning.

Fast upload (秒传) works by checking the file's MD5 against the server: if a file with the same MD5 already exists, the server simply returns its address and no bytes are re-uploaded.

The article describes the core logic using Redis to store upload status keyed by file MD5 and a flag indicating completion, and explains how to handle chunk metadata.
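The fast-upload check described above can be sketched as follows. This is a minimal, self-contained illustration: a `HashMap` stands in for the Redis hash `FILE_UPLOAD_STATUS` used later in the article, and all class and method names here are illustrative, not part of the article's codebase.

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch of the fast-upload (秒传) check: look up the file's MD5 in a
// status map; "true" means a complete copy already exists on the server.
public class FastUploadCheck {
    // Simulated Redis hash FILE_UPLOAD_STATUS: file MD5 -> "true" once complete.
    static final Map<String, String> uploadStatus = new HashMap<>();

    static String md5Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // True if the server already holds a complete copy of this file,
    // in which case the client can skip the upload entirely.
    static boolean canFastUpload(byte[] fileBytes) {
        return "true".equals(uploadStatus.get(md5Hex(fileBytes)));
    }

    public static void main(String[] args) {
        byte[] file = "hello world".getBytes();
        System.out.println(canFastUpload(file)); // false: MD5 not seen yet
        uploadStatus.put(md5Hex(file), "true");  // mark as fully uploaded
        System.out.println(canFastUpload(file)); // true: fast upload applies
    }
}
```

In production the map lookup would be a Redis `HGET`, as the `setUploadProgress2Redis` method later in the article shows.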

Chunked upload splits a file into equal‑size parts (chunks) that are uploaded separately and later reassembled on the server. It is suitable for large files or unstable network conditions.
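The splitting step can be sketched as below. This is a simplified in-memory version (a real client would stream from disk); the names are illustrative. Each slice keeps its index so the server can write it at `index * chunkSize`.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch of client-side chunking: split a byte array into
// fixed-size slices; the slice at position i is chunk index i.
public class Chunker {
    static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, offset, end)); // last chunk may be shorter
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<byte[]> chunks = split(new byte[10], 4);
        System.out.println(chunks.size());        // 3 chunks: 4 + 4 + 2 bytes
        System.out.println(chunks.get(2).length); // 2
    }
}
```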

Resumable upload (断点续传) records the progress of each chunk so that after a failure the client can continue from the last successful chunk, using a .conf file to track completed parts.
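The resume step can be sketched as follows. It mirrors the `.conf` convention used in `checkAndSetUploadProgress` later in the article, where one byte per chunk is set to `Byte.MAX_VALUE` when that chunk is written; the helper name here is illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the resume check: the .conf file holds one byte per chunk,
// Byte.MAX_VALUE (127) for done, 0 for pending, so after a failure the
// client re-sends only the indices that are still missing.
public class ResumeCheck {
    static List<Integer> missingChunks(byte[] confBytes) {
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < confBytes.length; i++) {
            if (confBytes[i] != Byte.MAX_VALUE) missing.add(i); // chunk i not yet received
        }
        return missing;
    }

    public static void main(String[] args) {
        // 5 chunks; 0, 1 and 3 finished, 2 and 4 still pending.
        byte[] conf = {Byte.MAX_VALUE, Byte.MAX_VALUE, 0, Byte.MAX_VALUE, 0};
        System.out.println(missingChunks(conf)); // [2, 4]
    }
}
```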

The implementation steps include: dividing the file on the client, sending each chunk with its index, creating a .conf file on the server to mark completed chunks, writing chunks to a temporary file using RandomAccessFile or MappedByteBuffer, and finally merging the chunks when all are received.

Code examples show the backend strategies:

@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {
    @Autowired
    private FilePathUtil filePathUtil;
    @Value("${upload.chunkSize}")
    private long defaultChunkSize;
    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile accessTmpFile = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            accessTmpFile = new RandomAccessFile(tmpFile, "rw");
            long chunkSize = Objects.isNull(param.getChunkSize()) ? defaultChunkSize * 1024 * 1024
                    : param.getChunkSize();
            long offset = chunkSize * param.getChunk();
            accessTmpFile.seek(offset);
            accessTmpFile.write(param.getFile().getBytes());
            boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
            return isOk;
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessTmpFile);
        }
        return false;
    }
}
@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {
    @Autowired
    private FilePathUtil filePathUtil;
    @Value("${upload.chunkSize}")
    private long defaultChunkSize;
    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile tempRaf = null;
        FileChannel fileChannel = null;
        MappedByteBuffer mappedByteBuffer = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            tempRaf = new RandomAccessFile(tmpFile, "rw");
            fileChannel = tempRaf.getChannel();
            long chunkSize = Objects.isNull(param.getChunkSize()) ? defaultChunkSize * 1024 * 1024
                    : param.getChunkSize();
            long offset = chunkSize * param.getChunk();
            byte[] fileData = param.getFile().getBytes();
            mappedByteBuffer = fileChannel
                    .map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
            mappedByteBuffer.put(fileData);
            boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
            return isOk;
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.freedMappedByteBuffer(mappedByteBuffer);
            FileUtil.close(fileChannel);
            FileUtil.close(tempRaf);
        }
        return false;
    }
}
@Slf4j
public abstract class SliceUploadTemplate implements SliceUploadStrategy {
    public abstract boolean upload(FileUploadRequestDTO param);
    protected File createTmpFile(FileUploadRequestDTO param) {
        FilePathUtil filePathUtil = SpringContextHolder.getBean(FilePathUtil.class);
        param.setPath(FileUtil.withoutHeadAndTailDiagonal(param.getPath()));
        String fileName = param.getFile().getOriginalFilename();
        String uploadDirPath = filePathUtil.getPath(param);
        String tempFileName = fileName + "_tmp";
        File tmpDir = new File(uploadDirPath);
        File tmpFile = new File(uploadDirPath, tempFileName);
        if (!tmpDir.exists()) {
            tmpDir.mkdirs();
        }
        return tmpFile;
    }
    @Override
    public FileUploadDTO sliceUpload(FileUploadRequestDTO param) {
        boolean isOk = this.upload(param);
        if (isOk) {
            File tmpFile = this.createTmpFile(param);
            FileUploadDTO fileUploadDTO = this.saveAndFileUploadDTO(param.getFile().getOriginalFilename(), tmpFile);
            return fileUploadDTO;
        }
        String md5 = FileMD5Util.getFileMD5(param.getFile());
        Map<Integer, String> map = new HashMap<>();
        map.put(param.getChunk(), md5);
        return FileUploadDTO.builder().chunkMd5Info(map).build();
    }
    public boolean checkAndSetUploadProgress(FileUploadRequestDTO param, String uploadDirPath) {
        String fileName = param.getFile().getOriginalFilename();
        File confFile = new File(uploadDirPath, fileName + ".conf");
        byte isComplete = 0;
        RandomAccessFile accessConfFile = null;
        try {
            accessConfFile = new RandomAccessFile(confFile, "rw");
            accessConfFile.setLength(param.getChunks());
            accessConfFile.seek(param.getChunk());
            accessConfFile.write(Byte.MAX_VALUE);
            byte[] completeList = FileUtils.readFileToByteArray(confFile);
            isComplete = Byte.MAX_VALUE;
            for (int i = 0; i < completeList.length && isComplete == Byte.MAX_VALUE; i++) {
                isComplete = (byte) (isComplete & completeList[i]);
            }
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessConfFile);
        }
        return setUploadProgress2Redis(param, uploadDirPath, fileName, confFile, isComplete);
    }
    private boolean setUploadProgress2Redis(FileUploadRequestDTO param, String uploadDirPath, String fileName, File confFile, byte isComplete) {
        RedisUtil redisUtil = SpringContextHolder.getBean(RedisUtil.class);
        if (isComplete == Byte.MAX_VALUE) {
            redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "true");
            redisUtil.del(FileConstant.FILE_MD5_KEY + param.getMd5());
            confFile.delete();
            return true;
        } else {
            if (!redisUtil.hHasKey(FileConstant.FILE_UPLOAD_STATUS, param.getMd5())) {
                redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "false");
                redisUtil.set(FileConstant.FILE_MD5_KEY + param.getMd5(), uploadDirPath + FileConstant.FILE_SEPARATORCHAR + fileName + ".conf");
            }
            return false;
        }
    }
    public FileUploadDTO saveAndFileUploadDTO(String fileName, File tmpFile) {
        FileUploadDTO fileUploadDTO = null;
        try {
            fileUploadDTO = renameFile(tmpFile, fileName);
            if (fileUploadDTO.isUploadComplete()) {
                // TODO: save file info to database
            }
        } catch (Exception e) {
            log.error(e.getMessage(), e);
        }
        return fileUploadDTO;
    }
    private FileUploadDTO renameFile(File toBeRenamed, String toFileNewName) {
        FileUploadDTO fileUploadDTO = new FileUploadDTO();
        if (!toBeRenamed.exists() || toBeRenamed.isDirectory()) {
            log.info("File does not exist: {}", toBeRenamed.getName());
            fileUploadDTO.setUploadComplete(false);
            return fileUploadDTO;
        }
        String ext = FileUtil.getExtension(toFileNewName);
        String p = toBeRenamed.getParent();
        String filePath = p + FileConstant.FILE_SEPARATORCHAR + toFileNewName;
        File newFile = new File(filePath);
        boolean uploadFlag = toBeRenamed.renameTo(newFile);
        fileUploadDTO.setMtime(DateUtil.getCurrentTimeStamp());
        fileUploadDTO.setUploadComplete(uploadFlag);
        fileUploadDTO.setPath(filePath);
        fileUploadDTO.setSize(newFile.length());
        fileUploadDTO.setFileExt(ext);
        fileUploadDTO.setFileId(toFileNewName);
        return fileUploadDTO;
    }
}

In summary, successful chunked and resumable uploads require coordination between front‑end and back‑end regarding chunk size and indices, and a reliable storage solution such as FastDFS, HDFS, or object storage like Alibaba OSS.

Tags: Backend, Java, Redis, File Upload, Chunked Upload, Resumable Upload, RandomAccessFile, MappedByteBuffer
Written by Architect's Tech Stack: Java backend, microservices, distributed systems, containerized programming, and more.