Implementing Fast File Upload: Instant Transfer, Chunked Upload, and Resume Support in Java
This article explains why traditional whole‑file uploads are unsuitable for large files and introduces three advanced techniques—instant transfer (秒传), chunked upload (分片上传), and resumable upload (断点续传)—along with detailed Java backend implementations using RandomAccessFile and MappedByteBuffer, plus integration tips and server recommendations.
Introduction
Uploading files is a common task, but a simple byte-stream upload works only for small files; with large files, any interruption forces a full restart. This article presents three techniques for a better upload experience: instant transfer, chunked upload, and resumable upload.
Instant Transfer (秒传)
What is instant transfer
When a file is uploaded, the server first checks its MD5 hash. If a file with the same MD5 already exists, the server returns a new address pointing at the already-stored file and the client skips the actual upload. If the file content changes, its MD5 changes too, and instant transfer no longer applies.
Core logic of instant transfer in this article
a) Store the upload status in a Redis hash, keyed by the file's MD5, with a flag indicating whether the upload is complete. b) If the flag is true, a repeat upload triggers instant transfer; if false, the server records the path of the chunk-status file under a key composed of a fixed prefix plus the MD5.
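The check above can be sketched as follows. This is a minimal illustration with a `HashMap` standing in for Redis; the class and method names are hypothetical, and the key prefixes merely mirror the `FileConstant` values used in the code later in this article.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the instant-transfer decision, with a HashMap standing in for Redis.
public class InstantTransferCheck {
    static final String FILE_UPLOAD_STATUS = "FILE_UPLOAD_STATUS:";
    static final String FILE_MD5_KEY = "FILE_MD5_KEY:";

    private final Map<String, String> redis = new HashMap<>();

    /** Called when the merge step finishes: flag this MD5 as fully uploaded. */
    public void markComplete(String md5) {
        redis.put(FILE_UPLOAD_STATUS + md5, "true");
    }

    /** Called on a partial upload: remember where the chunk-status file lives. */
    public void markInProgress(String md5, String confPath) {
        redis.put(FILE_UPLOAD_STATUS + md5, "false");
        redis.put(FILE_MD5_KEY + md5, confPath);
    }

    /** True means the file is already stored: skip the upload entirely. */
    public boolean canSkipUpload(String md5) {
        return "true".equals(redis.get(FILE_UPLOAD_STATUS + md5));
    }
}
```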
Chunked Upload (分片上传)
What is chunked upload
Chunked upload splits a large file into multiple parts (chunks) of a fixed size, uploads each part separately, and finally merges them on the server.
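The arithmetic behind fixed-size chunking can be sketched as below. The class and method names are illustrative (real front ends usually do this in JavaScript with `Blob.slice`); the point is that the last chunk may be shorter than the rest.

```java
// Helper math for fixed-size chunking: how many chunks a file needs,
// where each chunk starts, and how long each chunk actually is.
public class ChunkMath {
    /** Number of chunks needed for a file of fileSize bytes (ceiling division). */
    public static int chunkCount(long fileSize, long chunkSize) {
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }

    /** Byte offset where chunk `index` (0-based) starts. */
    public static long chunkOffset(int index, long chunkSize) {
        return index * chunkSize;
    }

    /** Actual length of chunk `index`; the last chunk may be shorter. */
    public static long chunkLength(int index, long fileSize, long chunkSize) {
        return Math.min(chunkSize, fileSize - chunkOffset(index, chunkSize));
    }
}
```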
Scenarios for chunked upload
1. Uploading large files
2. Poor network conditions, where the risk of having to retransmit is high
Resumable Upload (断点续传)
What is resumable upload
Resumable upload divides a file into several parts, each handled by a separate thread. If a network failure occurs, the client can continue uploading from the last successful part instead of restarting from the beginning.
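The multi-threaded sending described above can be sketched like this. It is a client-side illustration, not the article's server code: the uploader is a pluggable `IntPredicate` here (a real client would do an HTTP POST per chunk), and the class name is hypothetical.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.IntPredicate;

// Sketch of parallel chunk sending: each chunk index goes to a thread pool,
// and failed indices are collected so a resume pass can retry only those.
public class ParallelChunkSender {
    public static Set<Integer> send(int chunks, IntPredicate uploadChunk)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Set<Integer> failed = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < chunks; i++) {
            final int idx = i;
            pool.execute(() -> {
                if (!uploadChunk.test(idx)) {
                    failed.add(idx); // remember for the resume pass
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return failed;
    }
}
```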
Application scenarios
All scenarios suitable for chunked upload also apply to resumable upload.
Core logic of resumable upload
During chunked upload, if the process crashes or the network drops, the client records the progress. The server can provide an API for the client to query already uploaded chunks, allowing continuation from the next unfinished chunk.
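Such a query endpoint can be sketched as follows, using the .conf layout this article adopts later: one byte per chunk, `127` (`Byte.MAX_VALUE`) for uploaded, `0` for missing. The class and method names are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch of the "which chunks are still missing?" query: scan the .conf
// bytes and return the indices the client still needs to send.
public class ResumeQuery {
    public static List<Integer> missingChunks(byte[] conf) {
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < conf.length; i++) {
            if (conf[i] != Byte.MAX_VALUE) {
                missing.add(i); // chunk i was never written
            }
        }
        return missing;
    }

    public static List<Integer> missingChunks(Path confFile) throws IOException {
        return missingChunks(Files.readAllBytes(confFile));
    }
}
```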
Implementation steps
a) Conventional steps:
Split the file into equal‑size chunks.
Initialize a chunked upload task and obtain a unique identifier.
Send each chunk according to a chosen strategy (serial or parallel).
After all chunks are sent, the server verifies completeness and merges the chunks into the original file.
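The final merge step of the conventional flow can be sketched like this, assuming chunks were stored as separate part files; the `file.part0`, `file.part1`, … naming scheme is an assumption for illustration (the article's own approach below avoids a merge by writing chunks at their offsets directly).

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the merge step: concatenate part files in index order
// to rebuild the original file.
public class ChunkMerger {
    public static void merge(Path dir, String baseName, int chunks, Path target)
            throws IOException {
        try (OutputStream out = Files.newOutputStream(target)) {
            for (int i = 0; i < chunks; i++) {
                Path part = dir.resolve(baseName + ".part" + i);
                Files.copy(part, out); // append chunk i's bytes to the target
            }
        }
    }
}
```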
b) Steps used in this article:
Front‑end splits the file into fixed‑size chunks and sends the chunk index and size to the back‑end.
The back‑end creates a .conf file to record chunk status: each uploaded chunk writes the byte value 127 (Byte.MAX_VALUE) at its position, while unwritten positions remain 0. This file is the core of both instant and resumable upload.
The server calculates the offset from the chunk index and size, then writes the received data to the correct file position.
Backend code for writing files
a) RandomAccessFile implementation
@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile accessTmpFile = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            accessTmpFile = new RandomAccessFile(tmpFile, "rw");
            // The default chunk size is configured in MB; a client-supplied size wins.
            long chunkSize = Objects.isNull(param.getChunkSize())
                ? defaultChunkSize * 1024 * 1024
                : param.getChunkSize();
            // Seek to this chunk's offset and write its bytes in place.
            long offset = chunkSize * param.getChunk();
            accessTmpFile.seek(offset);
            accessTmpFile.write(param.getFile().getBytes());
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessTmpFile);
        }
        return false;
    }
}

b) MappedByteBuffer implementation
@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile tempRaf = null;
        FileChannel fileChannel = null;
        MappedByteBuffer mappedByteBuffer = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            tempRaf = new RandomAccessFile(tmpFile, "rw");
            fileChannel = tempRaf.getChannel();
            // The default chunk size is configured in MB; a client-supplied size wins.
            long chunkSize = Objects.isNull(param.getChunkSize())
                ? defaultChunkSize * 1024 * 1024
                : param.getChunkSize();
            long offset = chunkSize * param.getChunk();
            byte[] fileData = param.getFile().getBytes();
            // Map only this chunk's region of the file and write into it directly.
            mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
            mappedByteBuffer.put(fileData);
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            // Mapped buffers are not released by close(); free them explicitly.
            FileUtil.freedMappedByteBuffer(mappedByteBuffer);
            FileUtil.close(fileChannel);
            FileUtil.close(tempRaf);
        }
        return false;
    }
}

c) Core template class
@Slf4j
public abstract class SliceUploadTemplate implements SliceUploadStrategy {

    public abstract boolean upload(FileUploadRequestDTO param);

    protected File createTmpFile(FileUploadRequestDTO param) {
        FilePathUtil filePathUtil = SpringContextHolder.getBean(FilePathUtil.class);
        param.setPath(FileUtil.withoutHeadAndTailDiagonal(param.getPath()));
        String fileName = param.getFile().getOriginalFilename();
        String uploadDirPath = filePathUtil.getPath(param);
        String tempFileName = fileName + "_tmp";
        File tmpDir = new File(uploadDirPath);
        if (!tmpDir.exists()) {
            tmpDir.mkdirs();
        }
        return new File(uploadDirPath, tempFileName);
    }

    @Override
    public FileUploadDTO sliceUpload(FileUploadRequestDTO param) {
        boolean isOk = this.upload(param);
        if (isOk) {
            File tmpFile = this.createTmpFile(param);
            return this.saveAndFileUploadDTO(param.getFile().getOriginalFilename(), tmpFile);
        }
        // Not complete yet: return this chunk's MD5 so the client can verify it.
        String md5 = FileMD5Util.getFileMD5(param.getFile());
        Map<Integer, String> map = new HashMap<>();
        map.put(param.getChunk(), md5);
        return FileUploadDTO.builder().chunkMd5Info(map).build();
    }
    public boolean checkAndSetUploadProgress(FileUploadRequestDTO param, String uploadDirPath) {
        String fileName = param.getFile().getOriginalFilename();
        File confFile = new File(uploadDirPath, fileName + ".conf");
        byte isComplete = 0;
        RandomAccessFile accessConfFile = null;
        try {
            accessConfFile = new RandomAccessFile(confFile, "rw");
            // One status byte per chunk: mark this chunk as uploaded.
            accessConfFile.setLength(param.getChunks());
            accessConfFile.seek(param.getChunk());
            accessConfFile.write(Byte.MAX_VALUE);
            // AND all status bytes together: the result stays 127 only if
            // every chunk has been written.
            byte[] completeList = FileUtils.readFileToByteArray(confFile);
            isComplete = Byte.MAX_VALUE;
            for (int i = 0; i < completeList.length && isComplete == Byte.MAX_VALUE; i++) {
                isComplete = (byte) (isComplete & completeList[i]);
            }
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessConfFile);
        }
        return setUploadProgress2Redis(param, uploadDirPath, fileName, confFile, isComplete);
    }
    private boolean setUploadProgress2Redis(FileUploadRequestDTO param, String uploadDirPath,
            String fileName, File confFile, byte isComplete) {
        RedisUtil redisUtil = SpringContextHolder.getBean(RedisUtil.class);
        if (isComplete == Byte.MAX_VALUE) {
            // All chunks written: flag this MD5 as complete and drop the .conf file.
            redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "true");
            redisUtil.del(FileConstant.FILE_MD5_KEY + param.getMd5());
            confFile.delete();
            return true;
        } else {
            if (!redisUtil.hHasKey(FileConstant.FILE_UPLOAD_STATUS, param.getMd5())) {
                redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "false");
                // Remember where the .conf file lives so progress can be queried later.
                redisUtil.set(FileConstant.FILE_MD5_KEY + param.getMd5(),
                    uploadDirPath + FileConstant.FILE_SEPARATORCHAR + fileName + ".conf");
            }
            return false;
        }
    }
    public FileUploadDTO saveAndFileUploadDTO(String fileName, File tmpFile) {
        try {
            FileUploadDTO dto = renameFile(tmpFile, fileName);
            if (dto.isUploadComplete()) {
                log.info("upload complete, name={}", fileName);
                // TODO: persist file info to DB
            }
            return dto;
        } catch (Exception e) {
            log.error(e.getMessage(), e);
        }
        return null;
    }
    private FileUploadDTO renameFile(File toBeRenamed, String toFileNewName) {
        FileUploadDTO dto = new FileUploadDTO();
        if (!toBeRenamed.exists() || toBeRenamed.isDirectory()) {
            log.info("File does not exist: {}", toBeRenamed.getName());
            dto.setUploadComplete(false);
            return dto;
        }
        String ext = FileUtil.getExtension(toFileNewName);
        String parent = toBeRenamed.getParent();
        String filePath = parent + FileConstant.FILE_SEPARATORCHAR + toFileNewName;
        File newFile = new File(filePath);
        // Drop the "_tmp" suffix by renaming the temp file to its final name.
        boolean uploadFlag = toBeRenamed.renameTo(newFile);
        dto.setMtime(DateUtil.getCurrentTimeStamp());
        dto.setUploadComplete(uploadFlag);
        dto.setPath(filePath);
        dto.setSize(newFile.length());
        dto.setFileExt(ext);
        dto.setFileId(toFileNewName);
        return dto;
    }
}

Conclusion
Successful chunked upload requires the front end and back end to agree on the chunk size and chunk indexing. A dedicated file server (e.g., FastDFS, HDFS) is usually needed, but for simple upload/download scenarios an object storage service such as Alibaba Cloud OSS is recommended. OSS suits write-once, read-mostly storage; workloads with heavy delete or modify traffic may be better served by a true file server.
Code Ape Tech Column
Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn