Implementing Fast File Upload: Instant Upload, Chunked Upload, and Resume Upload with Java Backend
This article explains the concepts and implementation details of instant (秒传) upload, chunked (分片) upload, and breakpoint resume upload, providing Java backend code using Redis, RandomAccessFile, and MappedByteBuffer to achieve efficient large‑file transfer with MD5 deduplication and progress tracking.
Uploading large files can be optimized with three techniques: instant upload (秒传), chunked upload (分片上传), and breakpoint resume upload (断点续传). With instant upload, the server checks the file's MD5 hash and, if a file with the same hash already exists, simply returns its address without transferring any data again.
Instant Upload
The core logic stores the upload status in Redis, keyed by the file's MD5. If the status flag is true, the server short-circuits the transfer and returns the existing file's address (instant upload); otherwise it records the path of the partially uploaded file under a prefixed key so the upload can be continued later.
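As a minimal sketch of that lookup, the class below stands in for the Redis call with a plain HashMap; the class and method names are hypothetical, and a real implementation would use a Redis client (e.g. Jedis or Spring Data Redis) with the same key scheme:

```java
import java.util.HashMap;
import java.util.Map;

public class InstantUploadCheck {
    // Stand-in for Redis: key = file MD5, value = "true" once the file is fully uploaded.
    // In production this map would be replaced by Redis GET/SET calls.
    private final Map<String, String> store = new HashMap<>();

    /** Returns true when this MD5 is already known, i.e. instant upload applies. */
    public boolean isInstantUpload(String fileMd5) {
        return "true".equals(store.get(fileMd5));
    }

    /** Marks a file as fully uploaded, keyed by its MD5. */
    public void markUploaded(String fileMd5) {
        store.put(fileMd5, "true");
    }
}
```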
Chunked Upload
Chunked upload splits a large file into fixed-size parts and uploads each part separately. Once all parts have arrived, the server merges them back into the original file. This approach suits large files and unstable network conditions.
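The server-side merge step can be sketched as follows. This is a hypothetical helper, not the article's code: it assumes chunks were saved as numbered files (`0.part`, `1.part`, ...) and concatenates them in order; a real implementation would also verify the merged file's MD5 against the client's hash.

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChunkMerger {
    /**
     * Concatenates chunk files named "0.part" .. "(totalChunks-1).part"
     * from chunkDir into a single target file, in chunk order.
     */
    public static void merge(Path chunkDir, int totalChunks, Path target) throws IOException {
        try (OutputStream out = new BufferedOutputStream(Files.newOutputStream(target))) {
            for (int i = 0; i < totalChunks; i++) {
                // Append each chunk's bytes to the target stream
                Files.copy(chunkDir.resolve(i + ".part"), out);
            }
        }
    }
}
```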
Breakpoint Resume Upload
Breakpoint resume upload divides the file into parts, each of which can be uploaded by a separate thread. If a network failure occurs, the client resumes from the last successfully uploaded part instead of restarting the whole transfer.
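The backend described below records one status byte per chunk, with Byte.MAX_VALUE marking a finished chunk. Given that convention, deciding what to resume is a simple scan; the helper below is an illustrative sketch (names are not from the article):

```java
import java.util.ArrayList;
import java.util.List;

public class ResumePlanner {
    /**
     * Given the per-chunk progress bytes (Byte.MAX_VALUE = chunk done),
     * returns the indices of the chunks the client still needs to upload.
     */
    public static List<Integer> missingChunks(byte[] progress) {
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < progress.length; i++) {
            if (progress[i] != Byte.MAX_VALUE) {
                missing.add(i);
            }
        }
        return missing;
    }
}
```

The client then re-uploads only the returned indices, which is what makes a resumed transfer cheap after a failure.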
Backend Implementation
The backend provides two concrete strategies for writing file chunks:
1. RandomAccessFile Strategy
@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile accessTmpFile = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            accessTmpFile = new RandomAccessFile(tmpFile, "rw");
            // Chunk size in bytes: client-supplied, or the configured default (in MB)
            long chunkSize = Objects.isNull(param.getChunkSize())
                ? defaultChunkSize * 1024 * 1024
                : param.getChunkSize();
            // Seek to this chunk's offset and write it directly into the temp file
            long offset = chunkSize * param.getChunk();
            accessTmpFile.seek(offset);
            accessTmpFile.write(param.getFile().getBytes());
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.close(accessTmpFile);
        }
        return false;
    }
}

2. MappedByteBuffer Strategy
@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {

    @Autowired
    private FilePathUtil filePathUtil;

    @Value("${upload.chunkSize}")
    private long defaultChunkSize;

    @Override
    public boolean upload(FileUploadRequestDTO param) {
        RandomAccessFile tempRaf = null;
        FileChannel fileChannel = null;
        MappedByteBuffer mappedByteBuffer = null;
        try {
            String uploadDirPath = filePathUtil.getPath(param);
            File tmpFile = super.createTmpFile(param);
            tempRaf = new RandomAccessFile(tmpFile, "rw");
            fileChannel = tempRaf.getChannel();
            long chunkSize = Objects.isNull(param.getChunkSize())
                ? defaultChunkSize * 1024 * 1024
                : param.getChunkSize();
            long offset = chunkSize * param.getChunk();
            byte[] fileData = param.getFile().getBytes();
            // Memory-map this chunk's region of the temp file and write into it directly
            mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
            mappedByteBuffer.put(fileData);
            return super.checkAndSetUploadProgress(param, uploadDirPath);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        } finally {
            FileUtil.freedMappedByteBuffer(mappedByteBuffer);
            FileUtil.close(fileChannel);
            FileUtil.close(tempRaf);
        }
        return false;
    }
}

3. Core Template Class
@Slf4j
public abstract class SliceUploadTemplate implements SliceUploadStrategy {

    public abstract boolean upload(FileUploadRequestDTO param);

    protected File createTmpFile(FileUploadRequestDTO param) { /* ... */ }

    @Override
    public FileUploadDTO sliceUpload(FileUploadRequestDTO param) { /* ... */ }

    public boolean checkAndSetUploadProgress(FileUploadRequestDTO param, String uploadDirPath) { /* ... */ }

    private boolean setUploadProgress2Redis(FileUploadRequestDTO param, String uploadDirPath,
                                            String fileName, File confFile, byte isComplete) { /* ... */ }

    public FileUploadDTO saveAndFileUploadDTO(String fileName, File tmpFile) { /* ... */ }

    private FileUploadDTO renameFile(File toBeRenamed, String toFileNewName) { /* ... */ }
}

The upload progress is recorded in a .conf file in which each byte represents one chunk; a value of Byte.MAX_VALUE (127) marks that chunk as complete. Progress is also mirrored to Redis so that multiple servers can coordinate the same upload.
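To make the .conf convention concrete, here is a small, hypothetical completeness check (the class and method names are illustrative, not the article's): it reads the progress file and reports whether every chunk byte equals Byte.MAX_VALUE, which is the condition under which the server would merge and rename the temp file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class UploadProgressCheck {
    /**
     * Reads the .conf progress file (one status byte per chunk) and returns
     * true only when every chunk has been written (each byte == Byte.MAX_VALUE).
     */
    public static boolean isComplete(Path confFile) throws IOException {
        byte[] status = Files.readAllBytes(confFile);
        if (status.length == 0) {
            return false; // no chunks recorded yet
        }
        for (byte b : status) {
            if (b != Byte.MAX_VALUE) {
                return false;
            }
        }
        return true;
    }
}
```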
Conclusion
Successful chunked upload requires that client and server agree on chunk size and numbering. In production, a dedicated file server (e.g., FastDFS, HDFS) or object storage such as Alibaba Cloud OSS can be used, though OSS is less suited to files that are frequently deleted or modified.
Code Ape Tech Column
Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn