
Implementing Large File Upload with Chunking, Resume, and Instant Transfer Using Java RandomAccessFile

This article explains how to handle 2 GB video uploads by splitting files into chunks, using breakpoint resume and instant transfer techniques, and leveraging Java's RandomAccessFile together with Spring Boot and Redis to manage upload state, merge chunks, and store the final file.

Code Ape Tech Column

The author introduces a practical solution for uploading large files (around 2 GB) by dividing them into smaller chunks and by supporting the breakpoint-resume and instant-transfer features commonly offered by cloud storage services.

Key concepts are defined:

File chunking: splitting a large file into smaller pieces that can be uploaded or downloaded independently.

Breakpoint resume: each chunk is uploaded in its own thread; if a network error interrupts the transfer, the upload continues from the last successfully received chunk instead of starting over.

Instant transfer: if the server already holds the file (identified by its MD5 hash), the upload is skipped entirely and the URI of the existing copy is returned.
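Instant transfer hinges on computing the file's MD5 without loading the whole file into memory. A minimal sketch of how the server side might hash a file stream-wise (the class and method names here are illustrative, not from the article):

```java
import java.io.IOException;
import java.io.InputStream;
import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Util {
    // Stream the file through a DigestInputStream so a 2 GB upload
    // never has to be held in memory at once.
    public static String md5Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = new DigestInputStream(Files.newInputStream(file), md)) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* digest is updated as a side effect */ }
        }
        // Left-pad to 32 hex chars; BigInteger drops leading zeros.
        return String.format("%032x", new BigInteger(1, md.digest()));
    }
}
```

The resulting hex string is what the client would compare against (or send as the `md5` form field in the check request shown later).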

The article then focuses on RandomAccessFile , a Java class that extends Object and implements DataInput and DataOutput . It supports random reads and writes via a file pointer, which can be queried with getFilePointer() and moved with seek(long pos) . Four access modes are described: r (read-only), rw (read/write), rws (read/write, with content and metadata flushed synchronously), and rwd (read/write, with content flushed synchronously).

Important API methods are listed:

void seek(long pos)              // set the file pointer
native long getFilePointer()     // get the current pointer position
native long length()             // file length in bytes
void readFully(byte[] b)         // fill the buffer; throws EOFException if the file ends first
FileChannel getChannel()         // obtain the associated NIO channel
int skipBytes(int n)             // skip n bytes; returns the number actually skipped
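The interplay of seek() and the file pointer is the whole trick behind chunked writes. A small self-contained sketch (not from the article) showing that a write at an arbitrary offset advances the pointer past the written bytes:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class SeekDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("raf", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw")) {
            raf.setLength(10);            // pre-size the file
            raf.seek(4);                  // jump to offset 4
            raf.write('X');               // pointer advances to 5
            System.out.println(raf.getFilePointer()); // 5
            raf.seek(4);
            System.out.println((char) raf.read());    // X
        }
        Files.delete(tmp);
    }
}
```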

Although memory-mapped files in the JDK's NIO package have replaced most of RandomAccessFile's use cases, the author still demonstrates it for educational purposes.
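For comparison, the NIO alternative maps a region of the file into memory via FileChannel.map, so random-access writes become plain buffer operations. A minimal sketch (not part of the article's code):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("mmap", ".bin");
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map a 16-byte window of the file; writes to the buffer
            // reach the file without explicit seek()/write() calls.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 16);
            buf.put(4, (byte) 'X');   // random-access write, like seek(4) + write
        }
        System.out.println((char) Files.readAllBytes(tmp)[4]); // X
    }
}
```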

For the front‑end, the article suggests using JavaScript libraries to split the file, compute its MD5 hash, and send the hash to the server to check whether the file already exists. Sample front‑end code shows how to read the file as a binary string, compute the MD5, and post it via axios :

const fileReaderInstance = new FileReader();
fileReaderInstance.addEventListener("load", (e) => {
    const fileBlob = e.target.result;
    fileMD5 = md5(fileBlob);
    const formData = new FormData();
    formData.append("md5", fileMD5);
    axios.post(http + "/fileUpload/checkFileMd5", formData)
        .then((res) => {
            // "文件已存在" = "file already exists" – instant transfer
            if (res.data.message === "文件已存在") {
                success && success(res);
            } else {
                if (!res.data.data) {
                    // file never uploaded before – start a fresh upload
                } else {
                    // partial upload – resume the missing chunks
                    chunkArr = res.data.data;
                }
                readChunkMD5();
            }
        })
        .catch((e) => {});
});
fileReaderInstance.readAsBinaryString(file);

Chunk extraction on the client uses the slice method:

const getChunkInfo = (file, currentChunk, chunkSize) => {
    let start = currentChunk * chunkSize;
    let end = Math.min(file.size, start + chunkSize);
    let chunk = file.slice(start, end);
    return { start, end, chunk };
};
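The same offset arithmetic has to hold on the server so each chunk lands in the right place. A hypothetical Java mirror of the client's getChunkInfo (names are illustrative, not from the article's backend):

```java
public class ChunkInfo {
    // Byte range [start, end) covered by a given chunk index.
    static long[] range(long fileSize, int chunkIndex, long chunkSize) {
        long start = chunkIndex * chunkSize;
        long end = Math.min(fileSize, start + chunkSize);
        return new long[] { start, end };
    }

    public static void main(String[] args) {
        // Example: a 2 GB file split into 5 MB chunks.
        long fileSize = 2L * 1024 * 1024 * 1024;
        long chunkSize = 5L * 1024 * 1024;
        int chunks = (int) Math.ceil((double) fileSize / chunkSize);
        System.out.println(chunks);              // 410 chunks
        long[] last = range(fileSize, chunks - 1, chunkSize);
        System.out.println(last[1] - last[0]);   // 3145728 – the final partial chunk
    }
}
```

Only the last chunk may be shorter than chunkSize, which is why both client and server clamp the end offset to the file size.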

On the back‑end, a Spring Boot service with Redis stores the upload status and file paths. The controller checks the MD5 and returns either the existing file URL, a list of missing chunk indexes, or a flag indicating no prior upload.

/**
 * Verify the file's MD5: decide between instant transfer,
 * resuming missing chunks, or a fresh upload.
 */
public Result checkFileMd5(String md5) throws IOException {
    Object statusObj = stringRedisTemplate.opsForHash()
        .get(UploadConstants.FILE_UPLOAD_STATUS, md5);
    if (statusObj == null) {
        return Result.ok("该文件没有上传过"); // "this file has never been uploaded"
    }
    boolean complete = Boolean.parseBoolean(statusObj.toString());
    String value = stringRedisTemplate.opsForValue()
        .get(UploadConstants.FILE_MD5_KEY + md5);
    if (complete) {
        // fully uploaded before – return the existing URL (instant transfer)
        return Result.ok(value, "文件已存在"); // "file already exists"
    } else {
        // partially uploaded – scan the conf file for chunks not yet written
        File confFile = new File(value);
        byte[] completeList = FileUtils.readFileToByteArray(confFile);
        List<Integer> missChunkList = new LinkedList<>();
        for (int i = 0; i < completeList.length; i++) {
            if (completeList[i] != Byte.MAX_VALUE) {
                missChunkList.add(i);
            }
        }
        return Result.ok(missChunkList, "该文件上传了一部分"); // "file partially uploaded"
    }
}

After each chunk upload, the server writes a marker byte to a temporary "conf" file using RandomAccessFile , then checks whether all bytes are Byte.MAX_VALUE to determine completion. If complete, Redis is updated with true and the final file path; otherwise, the status remains false and the temporary path is stored.

RandomAccessFile accessConfFile = new RandomAccessFile(confFile, "rw");
accessConfFile.setLength(multipartFileDTO.getChunks());  // one status byte per chunk
accessConfFile.seek(multipartFileDTO.getChunk());        // jump to this chunk's slot
accessConfFile.write(Byte.MAX_VALUE);                    // mark the chunk as received

// The upload is complete when every status byte equals Byte.MAX_VALUE.
byte[] completeList = FileUtils.readFileToByteArray(confFile);
byte isComplete = Byte.MAX_VALUE;
for (int i = 0; i < completeList.length && isComplete == Byte.MAX_VALUE; i++) {
    isComplete = (byte) (isComplete & completeList[i]);
}
accessConfFile.close();

if (isComplete == Byte.MAX_VALUE) {
    stringRedisTemplate.opsForHash().put(UploadConstants.FILE_UPLOAD_STATUS, md5, "true");
    stringRedisTemplate.opsForValue().set(UploadConstants.FILE_MD5_KEY + md5, uploadDirPath + "/" + fileName);
} else {
    // keep false status and temporary path
}
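The article does not show the chunk-write path itself, but the RandomAccessFile technique it describes implies writing each chunk directly into the target file at its computed offset, so no separate merge pass is needed. A hypothetical sketch (class, method, and parameter names are mine, not the article's):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class ChunkWriter {
    // Write one uploaded chunk into the (sparse) target file at
    // offset chunk * chunkSize; chunks may arrive in any order.
    public static void writeChunk(String targetPath, int chunk, long chunkSize,
                                  byte[] data) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(targetPath, "rw")) {
            raf.seek(chunk * chunkSize);  // jump to this chunk's slot
            raf.write(data);              // e.g. bytes from MultipartFile.getBytes()
        }
    }
}
```

Because seek() past the current end simply extends the file, an out-of-order chunk creates a gap that a later chunk fills in, which is exactly what makes resume possible.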

Finally, the article closes by inviting readers to follow the author's public account and knowledge-sharing platform; the technical content itself provides a complete end-to-end guide to chunked upload, resume, instant transfer, and merging of large files with Java and Spring technologies.

Java · Redis · Spring Boot · File Upload · Chunking · Resume Upload · RandomAccessFile
Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
