Implementing Instant, Chunked, and Resumable File Uploads in Java
Uploading a large file as a single byte stream is fragile: any interruption forces the client to start over from the beginning. This article shows how to improve the large-file upload experience with three techniques: instant upload (秒传), which uses MD5-based deduplication to skip the transfer entirely; chunked upload (分片上传), which splits a file into equal-sized parts; and resumable upload (断点续传), which records progress and continues from the last successfully uploaded chunk. It provides complete Java backend implementations built on RandomAccessFile and MappedByteBuffer, with Redis-backed progress tracking and practical deployment recommendations.
Instant Upload
Before accepting a transfer, the server checks the MD5 of the incoming file; if a file with the same MD5 already exists, it simply returns a URL to the existing copy without re-uploading. Only the content matters: renaming a file leaves its MD5 unchanged, while changing the content changes the MD5 and defeats instant upload.
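The check described above can be sketched as follows. This is a minimal illustration, not the article's actual server code: the md5ToUrl map is a hypothetical stand-in for the server-side index (in practice a database or Redis, as discussed later).

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class InstantUploadCheck {
    // Hypothetical stand-in for the server-side MD5 -> URL index.
    private static final Map<String, String> md5ToUrl = new HashMap<>();

    // Hex-encoded MD5 of the file content; renaming the file does not change it.
    public static String md5Hex(byte[] content) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(content);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Returns the existing URL when the same content was uploaded before,
    // otherwise registers the new URL and returns null (meaning: upload needed).
    public static String checkOrRegister(byte[] content, String newUrl) throws Exception {
        String md5 = md5Hex(content);
        String existing = md5ToUrl.get(md5);
        if (existing != null) return existing;   // instant upload: skip the transfer
        md5ToUrl.put(md5, newUrl);
        return null;
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(checkOrRegister(data, "/files/a"));  // null: first upload
        System.out.println(checkOrRegister(data, "/files/b"));  // /files/a: dedup hit
    }
}
```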
Chunked Upload
Files are divided into fixed-size parts on the client side and uploaded independently. After all parts have been received, the server merges them back into the original file. This method suits large files and unstable networks, since a failed part can be retried without resending the rest.
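A minimal sketch of the client-side splitting step. Chunk numbering here is 0-based, which matches the offset = chunkSize * chunk arithmetic used by the server-side code below; the last part may be shorter than chunkSize.

```java
import java.util.ArrayList;
import java.util.List;

public class FileChunker {
    // Split a payload into fixed-size parts; the final part holds the remainder.
    public static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> parts = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int len = Math.min(chunkSize, data.length - offset);
            byte[] part = new byte[len];
            System.arraycopy(data, offset, part, 0, len);
            parts.add(part);
        }
        return parts;
    }

    public static void main(String[] args) {
        List<byte[]> parts = split(new byte[10], 4);
        System.out.println(parts.size());         // 3 parts: 4 + 4 + 2 bytes
        System.out.println(parts.get(2).length);  // 2
    }
}
```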
Resumable Upload
Resumable upload builds on chunked upload by persisting the upload state. A .conf file records which chunks have been uploaded, with each completed chunk's position marked by Byte.MAX_VALUE. The client can query this state and upload only the missing chunks.
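Given that convention, finding the chunks still to upload is a simple scan of the .conf bytes. A minimal sketch (the helper name missingChunks is illustrative, not from the article):

```java
import java.util.ArrayList;
import java.util.List;

public class ResumeState {
    // One status byte per chunk; Byte.MAX_VALUE marks a finished chunk,
    // any other value means that chunk must still be uploaded.
    public static List<Integer> missingChunks(byte[] confBytes) {
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < confBytes.length; i++) {
            if (confBytes[i] != Byte.MAX_VALUE) missing.add(i);
        }
        return missing;
    }

    public static void main(String[] args) {
        byte[] conf = {Byte.MAX_VALUE, 0, Byte.MAX_VALUE, 0};
        System.out.println(missingChunks(conf));  // [1, 3]
    }
}
```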
Core Backend Logic
The upload status is stored in Redis using the file MD5 as a key. When a chunk finishes, the server updates the .conf file and sets a Redis flag indicating whether the whole file is complete.
RandomAccessFile Implementation
@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile accessTmpFile = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            accessTmpFile = new RandomAccessFile(tmpFile, "rw");
            // Use the chunk size from the request, else the configured default (MB -> bytes)
            long chunkSize = Objects.isNull(param.getChunkSize())
                    ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
            // Seek to this chunk's offset and write its bytes in place
            long offset = chunkSize * param.getChunk();
            accessTmpFile.seek(offset);
            accessTmpFile.write(param.getFile().getBytes());
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessTmpFile);
        }
        return false;
    }
}

MappedByteBuffer Implementation
@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile tempRaf = null;
        FileChannel fileChannel = null;
        MappedByteBuffer mappedByteBuffer = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            tempRaf = new RandomAccessFile(tmpFile, "rw");
            fileChannel = tempRaf.getChannel();
            // Use the chunk size from the request, else the configured default (MB -> bytes)
            long chunkSize = Objects.isNull(param.getChunkSize())
                    ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
            long offset = chunkSize * param.getChunk();
            byte[] fileData = param.getFile().getBytes();
            // Memory-map just this chunk's region of the temp file and write into it
            mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
            mappedByteBuffer.put(fileData);
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.freedMappedByteBuffer(mappedByteBuffer);
            FileUtil.close(fileChannel);
            FileUtil.close(tempRaf);
        }
        return false;
    }
}

Template and Progress Tracking
The abstract SliceUploadTemplate defines the upload workflow, creates temporary files, merges chunks, and updates progress. The checkAndSetUploadProgress method writes a byte marker for each chunk into the .conf file, reads the whole file to determine if all chunks are complete, and synchronizes this state to Redis.
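The marker-and-scan logic described above can be sketched as follows. This is an illustration of the described behavior, not the article's actual checkAndSetUploadProgress; the class and method names are hypothetical, and the completeFlags map stands in for the Redis flag keyed by the file's MD5.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ProgressTracker {
    // Hypothetical stand-in for the Redis completion flag, keyed by file MD5.
    private static final ConcurrentMap<String, Boolean> completeFlags = new ConcurrentHashMap<>();

    // Write a Byte.MAX_VALUE marker for the finished chunk into the .conf file,
    // then scan all markers to decide whether the whole file is complete.
    public static boolean markChunkAndCheck(String confPath, String fileMd5,
                                            int chunk, int totalChunks) throws IOException {
        try (RandomAccessFile conf = new RandomAccessFile(confPath, "rw")) {
            if (conf.length() == 0) {
                conf.write(new byte[totalChunks]);  // initialize: 0 = not uploaded
            }
            conf.seek(chunk);
            conf.write(Byte.MAX_VALUE);             // mark this chunk as uploaded
            byte[] status = new byte[totalChunks];
            conf.seek(0);
            conf.readFully(status);
            for (byte b : status) {
                if (b != Byte.MAX_VALUE) {
                    completeFlags.put(fileMd5, false);
                    return false;                   // at least one chunk missing
                }
            }
        }
        completeFlags.put(fileMd5, true);           // all chunks present
        return true;
    }
}
```

In production the flag update would be a Redis write rather than a map put, and the .conf file lives next to the temporary upload file.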
Final Remarks
Successful chunked upload requires the client and server to agree on chunk size and ordering. For production, a dedicated file server such as FastDFS or HDFS is recommended; alternatively, Alibaba OSS can be used for simple upload/download scenarios, though it is object storage and may not suit heavy delete/modify workloads.
Java Architect Essentials