Frontend Development · 12 min read

Large File Upload: Principles, Implementation, and Optimizations

This article explains what constitutes a large file, contrasts its upload challenges with ordinary files, outlines the chunked upload workflow for both front‑end and back‑end, and provides practical code examples and optimization techniques such as resumable uploads, instant upload, progress tracking, and pause/resume functionality.

JD Tech Talk

Large files are generally defined as files larger than 100 MB. They cannot easily be sent as standard email attachments (typically limited to 20–50 MB), and they pose distinct challenges in transmission speed and reliability.

When uploading ordinary files, only two points need attention: specifying the upload endpoint and setting the request header Content-Type: multipart/form-data to send the file as a binary stream.
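As a point of comparison, a minimal ordinary upload might look like the sketch below. The `/upload.php` endpoint is a placeholder, not from the original article; note that when the body is a `FormData`, the browser fills in the `multipart/form-data` Content-Type (including the boundary) automatically.

```javascript
// Minimal sketch of an ordinary (single-request) file upload.
// '/upload.php' is a hypothetical endpoint.
function uploadFile(file) {
    const fd = new FormData();
    fd.append('file', file);
    // Do NOT set Content-Type manually: the browser adds the
    // multipart/form-data header with the correct boundary.
    return fetch('/upload.php', { method: 'POST', body: fd });
}
```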

Uploading large files introduces additional problems such as request timeout limits, network instability causing retries, HTTP/1.1 head‑of‑line blocking, and lack of progress indication.

The core principle for large‑file upload is to split the file into fixed‑size chunks, upload each chunk independently, and then reassemble them on the server. The front‑end workflow is: obtain the file → slice into chunks → upload each chunk.

Key optimization points include resumable upload (break‑point continuation), instant upload (skip already uploaded files), and displaying upload progress.

On the back‑end, received chunks are stored under a directory identified by a unique identifier, and after all chunks arrive they are merged into the final file.

Important parameters:

identifier: a unique identifier for the file, derived from filename + size or a content hash.

chunkNumber: the index of the current chunk (1-based).

totalChunks: total number of chunks.

Example front‑end chunking code:

// Get identifier (same for the same file; a content hash is more robust)
function createIdentifier(file) {
    return file.name + file.size;
}
// Split the file into fixed-size chunks using Blob.slice
function slice(file, piece) {
    const chunks = [];
    for (let start = 0; start < file.size; start += piece) {
        chunks.push(file.slice(start, start + piece));
    }
    return chunks;
}
// Minimal POST helper
function post(url, formData) {
    return fetch(url, { method: 'POST', body: formData });
}

let file = document.querySelector('[name=file]').files[0];
const LENGTH = 1024 * 1024; // 1 MB per chunk
let chunks = slice(file, LENGTH);
let identifier = createIdentifier(file);
let tasks = [];
chunks.forEach((chunk, index) => {
    let fd = new FormData();
    fd.append('file', chunk);
    fd.append('identifier', identifier);
    fd.append('chunkNumber', index + 1); // 1-based chunk index
    fd.append('totalChunks', chunks.length);
    tasks.push(post('/mkblk.php', fd));
});
// Once every chunk has been uploaded, ask the server to merge them
Promise.all(tasks).then(() => {
    let fd = new FormData();
    fd.append('identifier', identifier);
    fd.append('totalChunks', chunks.length);
    post('/mkfile.php', fd).then(res => console.log(res));
});

Corresponding back‑end PHP code for storing chunks and merging them:

// mkblk.php — store one chunk
$identifier = basename($_POST['identifier']); // sanitize: used as a directory name
$path = './upload/' . $identifier;
if (!is_dir($path)) { mkdir($path, 0777, true); }
$filename = $path . '/' . (int)$_POST['chunkNumber'];
move_uploaded_file($_FILES['file']['tmp_name'], $filename);

// mkfile.php — merge the chunks in order
$identifier = basename($_POST['identifier']);
$totalChunks = (int)$_POST['totalChunks'];
// The extension is hard-coded for brevity; in practice the original
// filename should be sent along with the upload and stored.
$finalFile = "./upload/{$identifier}/file.jpg";
$fd = fopen($finalFile, 'wb'); // open once, then append each chunk in sequence
for ($i = 1; $i <= $totalChunks; $i++) {
    $chunkPath = "./upload/{$identifier}/{$i}";
    fwrite($fd, file_get_contents($chunkPath));
    unlink($chunkPath); // clean up the chunk after merging
}
fclose($fd);

Resumable upload is achieved by recording successfully uploaded chunk indices (e.g., in localStorage) and skipping them on subsequent attempts. Sample front‑end resumable code:

// Retrieve the list of chunk indices already uploaded for this file
function getUploadSliceRecord(identifier) {
    let record = localStorage.getItem(identifier);
    return record ? JSON.parse(record) : [];
}
// Record a successfully uploaded chunk index
function saveUploadSliceRecord(identifier, sliceIndex) {
    let list = getUploadSliceRecord(identifier);
    list.push(sliceIndex);
    localStorage.setItem(identifier, JSON.stringify(list));
}

let identifier = createIdentifier(file);
let record = getUploadSliceRecord(identifier);
let tasks = [];
chunks.forEach((chunk, index) => {
    if (record.includes(index)) return; // skip chunks uploaded on a previous attempt
    let fd = new FormData();
    fd.append('file', chunk);
    fd.append('identifier', identifier);
    fd.append('chunkNumber', index + 1);
    fd.append('totalChunks', chunks.length);
    let task = post('/mkblk.php', fd).then(() => {
        saveUploadSliceRecord(identifier, index);
    });
    tasks.push(task);
});
// ... after all tasks settle, call mkfile.php as before

Additional considerations include cleaning up chunk files after successful merging, handling chunk expiration, and implementing instant upload by checking if the server already possesses the file based on its hash.

For progress monitoring and pause/resume, the progress event on xhr.upload reports bytes sent, and xhr.abort() cancels an in-flight request.
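A sketch of how this might be wired up; the uploadWithProgress helper is illustrative, not from the article. XMLHttpRequest is used because fetch has no built-in upload-progress event.

```javascript
// Upload a FormData with per-chunk progress reporting and a cancel handle.
function uploadWithProgress(url, formData, onProgress) {
    const xhr = new XMLHttpRequest();
    const done = new Promise((resolve, reject) => {
        xhr.upload.onprogress = (e) => {
            // e.loaded / e.total is the fraction of this request sent so far
            if (e.lengthComputable) onProgress(e.loaded / e.total);
        };
        xhr.onload = () => resolve(xhr.responseText);
        xhr.onerror = () => reject(new Error('upload failed'));
        xhr.onabort = () => reject(new Error('upload aborted'));
        xhr.open('POST', url);
        xhr.send(formData);
    });
    // abort() doubles as "pause": combined with the resumable record above,
    // resuming re-uploads only the chunks that never completed.
    return { done, abort: () => xhr.abort() };
}
```

Overall progress can be derived by averaging the per-chunk fractions across all chunks.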

Several mature solutions exist, such as the Qiniu SDK and Tencent Cloud SDK. A recommended Vue component is vue-simple-uploader, which supports chunked uploads, resumable uploads, instant upload, progress display, and more.

In summary, the article introduces large‑file concepts, contrasts them with ordinary files, details the chunked upload workflow, provides practical front‑end and back‑end code, and points to a ready‑made Vue uploader component for production use.

Tags: Backend · Frontend · Vue · Chunked Upload · Resumable Upload · Large File Upload
Written by

JD Tech Talk

Official JD Tech public account delivering best practices and technology innovation.
