Implementing Fast File Upload: Instant Upload, Chunked Upload, and Resume Upload with Java Backend
This article explains how to improve the large‑file upload experience with instant (MD5‑based) upload, chunked (slice) upload, and breakpoint‑resume upload, detailing Redis‑based state management and providing Java backend implementations based on RandomAccessFile and MappedByteBuffer.
Instant Upload
Instant upload ("秒传") works by calculating the file's MD5 on the client; if the server already stores a file with the same MD5, it returns a new URL without re‑uploading the data. Changing the file content (not just the name) changes the MD5 and disables instant upload.
Core Logic
a) Store upload status in Redis using the file MD5 as the key and a flag indicating completion.
b) When the flag is true, subsequent uploads of the same file trigger instant upload; when false, the server records the path of the chunk progress (.conf) file under a composite key (a fixed prefix plus the MD5).
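The core logic above can be sketched with a minimal class; here a plain HashMap stands in for the Redis hash (`FILE_UPLOAD_STATUS` in the article's code), and the class and method names are illustrative assumptions, not part of the article's codebase:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the Redis-backed upload-status hash.
public class InstantUploadCheck {
    private final Map<String, String> uploadStatus = new HashMap<>();

    // Mark a file (identified by its MD5) as fully uploaded.
    public void markComplete(String md5) {
        uploadStatus.put(md5, "true");
    }

    // Instant upload applies only when the completion flag for this MD5 is "true".
    public boolean canInstantUpload(String md5) {
        return "true".equals(uploadStatus.get(md5));
    }
}
```

In production the flag would live in Redis so that all application instances see the same state; the HashMap is only for demonstration.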
Chunked Upload
Chunked upload splits a large file into fixed‑size parts (chunks) that are uploaded separately and later reassembled on the server.
Scenarios
1. Large file uploads.
2. Unstable network conditions where failed chunks need to be retransmitted.
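The arithmetic behind chunking is simple; the following sketch (an assumed helper, not from the article) computes how many fixed‑size chunks a file splits into and the byte offset where each chunk is written, matching the `chunkSize * chunkIndex` formula used later:

```java
// Minimal chunk arithmetic for splitting and reassembling a file.
public class ChunkMath {
    // Ceiling division: the last chunk may be smaller than chunkSize.
    public static long chunkCount(long fileSize, long chunkSize) {
        return (fileSize + chunkSize - 1) / chunkSize;
    }

    // Byte offset the server seeks to before writing chunk chunkIndex.
    public static long chunkOffset(long chunkSize, long chunkIndex) {
        return chunkSize * chunkIndex;
    }
}
```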
Resume Upload
What It Is
Resume upload divides a file into several parts, each uploaded by a separate thread; if a network failure occurs, the client can continue from the last successful part instead of restarting from the beginning.
Application Scenarios
Resume upload is essentially a derivative of chunked upload, so it applies to all chunked upload use cases.
Core Logic
The client records the progress of each uploaded chunk. The server provides an interface to query which chunks have already been stored, allowing the client to resume from the next missing chunk.
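A client could derive the resume point from that per‑chunk progress record. In the scheme this article uses, each chunk's slot holds `Byte.MAX_VALUE` (127) once uploaded and 0 otherwise; the helper below (an illustrative sketch, not the article's code) finds the first missing chunk:

```java
// Sketch: locate where to resume from a per-chunk progress byte array,
// where Byte.MAX_VALUE marks an uploaded chunk and 0 a missing one.
public class ResumePoint {
    // Returns the index of the first missing chunk, or -1 if all chunks are present.
    public static int firstMissingChunk(byte[] progress) {
        for (int i = 0; i < progress.length; i++) {
            if (progress[i] != Byte.MAX_VALUE) {
                return i;
            }
        }
        return -1;
    }
}
```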
Implementation Steps
a) Conventional steps:
Split the file into equal‑size chunks.
Initialize a chunked upload task and obtain a unique identifier.
Upload each chunk sequentially or in parallel.
After all chunks are received, the server merges them into the original file.
b) Steps used in this article:
The client sends each chunk with its index and size.
The server creates a .conf file whose length equals the total number of chunks; for each uploaded chunk it writes the byte value 127 (Byte.MAX_VALUE) at that chunk's index, while positions for chunks not yet received remain 0.
When a request arrives, the server calculates the offset using chunkSize * chunkIndex and writes the received data at that offset.
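The .conf bookkeeping described above can be sketched in isolation. The class and method names here are illustrative assumptions; the mechanism (one byte per chunk, set to `Byte.MAX_VALUE` on arrival, complete when every byte is 127) mirrors the article's approach:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.file.Files;

// Sketch of the per-chunk progress (.conf) file: one byte per chunk.
public class ChunkProgressFile {
    // Create a throwaway conf file for demonstration purposes.
    public static File newTempConf() {
        try {
            File f = File.createTempFile("upload", ".conf");
            f.deleteOnExit();
            return f;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Mark chunkIndex as received in a conf file sized to totalChunks bytes.
    public static void markChunk(File confFile, int totalChunks, int chunkIndex) {
        try (RandomAccessFile raf = new RandomAccessFile(confFile, "rw")) {
            raf.setLength(totalChunks);
            raf.seek(chunkIndex);
            raf.write(Byte.MAX_VALUE);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The upload is complete when every byte equals Byte.MAX_VALUE.
    public static boolean isComplete(File confFile) {
        try {
            byte[] bytes = Files.readAllBytes(confFile.toPath());
            for (byte b : bytes) {
                if (b != Byte.MAX_VALUE) {
                    return false;
                }
            }
            return bytes.length > 0;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because chunks may arrive in any order and in parallel, a fixed‑length file with positional markers is a simple, order‑independent progress record.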
Code Implementation
The front end uses Baidu's WebUploader plugin for chunking (see the official guide). The back end provides two implementations for writing chunks:
1. RandomAccessFile Implementation
@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {
@Autowired
private FilePathUtil filePathUtil;
@Value("${upload.chunkSize}")
private long defaultChunkSize;
@Override
public boolean upload(FileUploadRequestDTO param) {
RandomAccessFile accessTmpFile = null;
try {
String uploadDirPath = filePathUtil.getPath(param);
File tmpFile = super.createTmpFile(param);
accessTmpFile = new RandomAccessFile(tmpFile, "rw");
long chunkSize = Objects.isNull(param.getChunkSize()) ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
long offset = chunkSize * param.getChunk();
accessTmpFile.seek(offset);
accessTmpFile.write(param.getFile().getBytes());
boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
return isOk;
} catch (IOException e) {
log.error(e.getMessage(), e);
} finally {
FileUtil.close(accessTmpFile);
}
return false;
}
}
2. MappedByteBuffer Implementation
@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {
@Autowired
private FilePathUtil filePathUtil;
@Value("${upload.chunkSize}")
private long defaultChunkSize;
@Override
public boolean upload(FileUploadRequestDTO param) {
RandomAccessFile tempRaf = null;
FileChannel fileChannel = null;
MappedByteBuffer mappedByteBuffer = null;
try {
String uploadDirPath = filePathUtil.getPath(param);
File tmpFile = super.createTmpFile(param);
tempRaf = new RandomAccessFile(tmpFile, "rw");
fileChannel = tempRaf.getChannel();
long chunkSize = Objects.isNull(param.getChunkSize()) ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
long offset = chunkSize * param.getChunk();
byte[] fileData = param.getFile().getBytes();
mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
mappedByteBuffer.put(fileData);
boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
return isOk;
} catch (IOException e) {
log.error(e.getMessage(), e);
} finally {
FileUtil.freedMappedByteBuffer(mappedByteBuffer);
FileUtil.close(fileChannel);
FileUtil.close(tempRaf);
}
return false;
}
}
3. Core Template Class
@Slf4j
public abstract class SliceUploadTemplate implements SliceUploadStrategy {
public abstract boolean upload(FileUploadRequestDTO param);
protected File createTmpFile(FileUploadRequestDTO param) {
FilePathUtil filePathUtil = SpringContextHolder.getBean(FilePathUtil.class);
param.setPath(FileUtil.withoutHeadAndTailDiagonal(param.getPath()));
String fileName = param.getFile().getOriginalFilename();
String uploadDirPath = filePathUtil.getPath(param);
String tempFileName = fileName + "_tmp";
File tmpDir = new File(uploadDirPath);
File tmpFile = new File(uploadDirPath, tempFileName);
if (!tmpDir.exists()) {
tmpDir.mkdirs();
}
return tmpFile;
}
@Override
public FileUploadDTO sliceUpload(FileUploadRequestDTO param) {
boolean isOk = this.upload(param);
if (isOk) {
File tmpFile = this.createTmpFile(param);
FileUploadDTO fileUploadDTO = this.saveAndFileUploadDTO(param.getFile().getOriginalFilename(), tmpFile);
return fileUploadDTO;
}
String md5 = FileMD5Util.getFileMD5(param.getFile());
Map<Integer, String> map = new HashMap<>();
map.put(param.getChunk(), md5);
return FileUploadDTO.builder().chunkMd5Info(map).build();
}
public boolean checkAndSetUploadProgress(FileUploadRequestDTO param, String uploadDirPath) {
String fileName = param.getFile().getOriginalFilename();
File confFile = new File(uploadDirPath, fileName + ".conf");
byte isComplete = 0;
RandomAccessFile accessConfFile = null;
try {
accessConfFile = new RandomAccessFile(confFile, "rw");
accessConfFile.setLength(param.getChunks());
accessConfFile.seek(param.getChunk());
accessConfFile.write(Byte.MAX_VALUE);
byte[] completeList = FileUtils.readFileToByteArray(confFile);
isComplete = Byte.MAX_VALUE;
for (int i = 0; i < completeList.length && isComplete == Byte.MAX_VALUE; i++) {
isComplete = (byte) (isComplete & completeList[i]);
}
} catch (IOException e) {
log.error(e.getMessage(), e);
} finally {
FileUtil.close(accessConfFile);
}
return setUploadProgress2Redis(param, uploadDirPath, fileName, confFile, isComplete);
}
private boolean setUploadProgress2Redis(FileUploadRequestDTO param, String uploadDirPath, String fileName, File confFile, byte isComplete) {
RedisUtil redisUtil = SpringContextHolder.getBean(RedisUtil.class);
if (isComplete == Byte.MAX_VALUE) {
redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "true");
redisUtil.del(FileConstant.FILE_MD5_KEY + param.getMd5());
confFile.delete();
return true;
} else {
if (!redisUtil.hHasKey(FileConstant.FILE_UPLOAD_STATUS, param.getMd5())) {
redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "false");
redisUtil.set(FileConstant.FILE_MD5_KEY + param.getMd5(), uploadDirPath + FileConstant.FILE_SEPARATORCHAR + fileName + ".conf");
}
return false;
}
}
public FileUploadDTO saveAndFileUploadDTO(String fileName, File tmpFile) {
FileUploadDTO fileUploadDTO = null;
try {
fileUploadDTO = renameFile(tmpFile, fileName);
if (fileUploadDTO.isUploadComplete()) {
// TODO: persist file metadata to database
}
} catch (Exception e) {
log.error(e.getMessage(), e);
}
return fileUploadDTO;
}
private FileUploadDTO renameFile(File toBeRenamed, String toFileNewName) {
FileUploadDTO fileUploadDTO = new FileUploadDTO();
if (!toBeRenamed.exists() || toBeRenamed.isDirectory()) {
log.info("File does not exist or is a directory: {}", toBeRenamed.getName());
fileUploadDTO.setUploadComplete(false);
return fileUploadDTO;
}
String ext = FileUtil.getExtension(toFileNewName);
String parent = toBeRenamed.getParent();
String filePath = parent + FileConstant.FILE_SEPARATORCHAR + toFileNewName;
File newFile = new File(filePath);
boolean uploadFlag = toBeRenamed.renameTo(newFile);
fileUploadDTO.setMtime(DateUtil.getCurrentTimeStamp());
fileUploadDTO.setUploadComplete(uploadFlag);
fileUploadDTO.setPath(filePath);
fileUploadDTO.setSize(newFile.length());
fileUploadDTO.setFileExt(ext);
fileUploadDTO.setFileId(toFileNewName);
return fileUploadDTO;
}
}
Conclusion
Successful chunked upload requires strict coordination between front‑end and back‑end regarding chunk size and index. A dedicated file server (e.g., FastDFS, HDFS) is usually needed; otherwise, object storage services like Alibaba OSS can be used for simple upload/download scenarios, though they are less suitable for frequent deletions or modifications.
In a test environment (4‑core CPU, 8 GB RAM), uploading a 24 GB file took about 30 minutes, with most of the time spent on client‑side MD5 calculation while the back‑end write speed remained fast.
For teams that prefer not to maintain their own file server, Alibaba OSS provides a convenient form‑based upload endpoint that offloads the upload pressure to the cloud service.
Self‑Promotion
The author has compiled three technical columns (Spring Cloud, Spring Boot, MyBatis) into PDFs that can be obtained by following the public account "码猿技术专栏" and replying with the respective keywords.
If this article helped you, please like, view, share, and bookmark it; your support motivates further content creation.
Code Ape Tech Column
Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn