Fast, Resumable Large File Uploads with Vue & Express

This article walks through a complete Vue-and-Express solution for uploading massive files: chunked splitting, hash-based instant upload detection, resumable transfers, concurrency control, manual abort handling, and server-side merging with streams, with ready-to-use code snippets at every step.


1. What We Aim to Achieve

The core feature list includes:

Chunked Upload: Split a large file into fixed-size pieces to avoid single-request timeouts.

Instant Upload: If the server already has the complete file, return success immediately without re-uploading.

Resumable Upload: After a page refresh or interruption, upload only the chunks that are still missing.

Concurrency Control: Limit the number of simultaneous chunk uploads to protect the browser and server.

Manual Abort: Let users stop the upload at any time while preserving already uploaded chunks.

2. Full Process Breakdown

The workflow can be summarized in five steps:

User selects a file → Frontend splits it into chunks and computes a hash → Verify file status with the server (instant upload / resume) → Upload chunks concurrently → Backend merges the chunks

Step 1: File Selection (Frontend Trigger)

Use a native <input type="file"> element and handle its change event to kick off the process.

<template>
  <div class="upload-container">
    <h2>Large File Upload Demo</h2>
    <input @change="handleUpload" type="file" class="file-input" />
    <!-- Show abort button only while uploading -->
    <button @click="abortUpload" v-if="isUploading" class="abort-btn">Abort Upload</button>
  </div>
</template>
<script setup>
import { ref } from "vue";
const isUploading = ref(false);
const abortControllers = ref([]);
const handleUpload = async (e) => {
  const file = e.target.files[0];
  if (!file) return;
  // Subsequent core logic: chunking, hashing, verification …
};
</script>
<style scoped>
.upload-container { margin: 20px; }
.file-input { margin-right: 10px; }
.abort-btn { padding: 4px 8px; background: #ff4444; color: white; border: none; border-radius: 4px; }
</style>

Step 2: Chunking & Hash Calculation

Split the file into 1 MB chunks with File.slice() and compute a unique hash with spark-md5. Rather than hashing every byte, the hash is computed from the full first and last chunks plus three 2-byte samples (head, middle, tail) of each middle chunk, which dramatically speeds up hashing for GB-size files at the cost of a small collision risk between files that differ only outside the sampled bytes.

import sparkMD5 from "spark-md5";

// Chunk size (adjustable); 1 MB per chunk
const CHUNK_SIZE = 1024 * 1024;

/** Split the file into an array of fixed-size Blob chunks */
const createChunks = (file) => {
  let cur = 0;
  const chunks = [];
  while (cur < file.size) {
    chunks.push(file.slice(cur, cur + CHUNK_SIZE));
    cur += CHUNK_SIZE;
  }
  return chunks;
};

/** Compute a sampled file hash with spark-md5 */
const calHash = (chunks) => {
  return new Promise((resolve) => {
    const spark = new sparkMD5.ArrayBuffer();
    const fileReader = new FileReader();
    const targets = [];
    // Sampling strategy: full first/last chunk, 2-byte samples from middle chunks
    chunks.forEach((chunk, index) => {
      if (index === 0 || index === chunks.length - 1) {
        targets.push(chunk);
      } else {
        // Head, middle, and tail of each middle chunk
        targets.push(chunk.slice(0, 2));
        targets.push(chunk.slice(CHUNK_SIZE / 2, CHUNK_SIZE / 2 + 2));
        targets.push(chunk.slice(CHUNK_SIZE - 2, CHUNK_SIZE));
      }
    });
    // Assign the handler before kicking off the read
    fileReader.onload = (e) => {
      spark.append(e.target.result);
      resolve(spark.end());
    };
    fileReader.readAsArrayBuffer(new Blob(targets));
  });
};

Step 3: Server‑Side Verification

After obtaining the hash, the frontend sends a /verify request. The backend checks whether a complete file already exists (instant upload) and which chunks have been uploaded (resumable upload).

const express = require("express");
const path = require("path");
const fse = require("fs-extra");
const cors = require("cors");
const bodyParser = require("body-parser");
const app = express();
app.use(cors());
app.use(bodyParser.json());
// Directory where chunks and merged files are stored
const UPLOAD_DIR = path.resolve(__dirname, "uploads");
fse.ensureDirSync(UPLOAD_DIR);
// Extract the file extension, e.g. ".mp4"
const extractExt = (fileName) => fileName.slice(fileName.lastIndexOf("."));
app.post("/verify", async (req, res) => {
  const { fileHash, fileName } = req.body;
  const completeFilePath = path.resolve(UPLOAD_DIR, `${fileHash}${extractExt(fileName)}`);
  if (fse.existsSync(completeFilePath)) {
    return res.json({ status: true, data: { shouldUpload: false } });
  }
  const chunkDir = path.resolve(UPLOAD_DIR, fileHash);
  const existChunks = fse.existsSync(chunkDir) ? await fse.readdir(chunkDir) : [];
  res.json({ status: true, data: { shouldUpload: true, existChunks } });
});
app.listen(3000, () => console.log("Server running at http://localhost:3000"));
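
On the frontend, the matching /verify call can be a thin fetch wrapper. A minimal sketch (verifyUpload is a hypothetical helper name, not part of the original code):

/** Ask the server whether the complete file or some chunks already exist */
const verifyUpload = async (fileHash, fileName) => {
  const res = await fetch("http://localhost:3000/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ fileHash, fileName }),
  });
  const { data } = await res.json();
  // data.shouldUpload === false means instant upload: nothing to transfer
  // data.existChunks lists chunk names to skip when resuming
  return data;
};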

Step 4: Chunk Upload with Concurrency Control

Generate FormData for each chunk, filter out already uploaded ones, and upload using a request pool limited to six concurrent requests. Abort controllers allow manual interruption.
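
The FormData preparation itself is not shown in the original listing. A minimal sketch, assuming chunk names of the form `${fileHash}-${index}` (the merge step's sort relies on this) and a hypothetical helper name createFormDatas; the field names must match what the /upload handler reads in Step 5:

/** Build FormData objects, skipping chunks the server already has */
const createFormDatas = (chunks, fileHash, existChunks) => {
  return chunks
    .map((chunk, index) => ({ chunk, chunkHash: `${fileHash}-${index}` }))
    .filter(({ chunkHash }) => !existChunks.includes(chunkHash))
    .map(({ chunk, chunkHash }) => {
      const formData = new FormData();
      // Field names must match what the backend /upload handler expects
      formData.append("filehash", fileHash);
      formData.append("chunkhash", chunkHash);
      formData.append("chunk", chunk);
      return formData;
    });
};

The pool-based uploader below then consumes these FormData objects.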

/** Upload chunks with a concurrency limit (request pool of 6) */
const uploadWithConcurrencyControl = async (formDatas) => {
  const MAX_CONCURRENT = 6;
  let currentIndex = 0;
  const taskPool = [];
  while (currentIndex < formDatas.length) {
    // Stop scheduling new requests once the user aborts
    if (!isUploading.value) return;
    const controller = new AbortController();
    const { signal } = controller;
    abortControllers.value.push(controller);
    const task = fetch("http://localhost:3000/upload", {
      method: "POST",
      body: formDatas[currentIndex],
      signal,
    })
      .then((res) => {
        taskPool.splice(taskPool.indexOf(task), 1);
        abortControllers.value = abortControllers.value.filter((c) => c !== controller);
        return res;
      })
      .catch((err) => {
        if (err.name !== "AbortError") console.error("Chunk upload failed:", err);
        taskPool.splice(taskPool.indexOf(task), 1);
        abortControllers.value = abortControllers.value.filter((c) => c !== controller);
      });
    taskPool.push(task);
    // When the pool is full, wait for the fastest request to settle
    if (taskPool.length === MAX_CONCURRENT) await Promise.race(taskPool);
    currentIndex++;
  }
  await Promise.all(taskPool);
  // Only ask the server to merge if the upload wasn't aborted
  if (isUploading.value) mergeRequest();
};
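
mergeRequest is invoked at the end of the pool, but its body is not shown in the original snippets. A minimal sketch, assuming fileHash and fileName are refs kept in component state (defined in the completed handleUpload sketch below):

/** Ask the server to merge all uploaded chunks into the final file */
const mergeRequest = async () => {
  await fetch("http://localhost:3000/merge", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      fileHash: fileHash.value,
      fileName: fileName.value,
      size: CHUNK_SIZE, // the server uses this to compute each chunk's write offset
    }),
  });
  isUploading.value = false;
  alert("Upload complete");
};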

Manual Abort

/** Abort all ongoing uploads */
const abortUpload = () => {
  if (!isUploading.value) return;
  abortControllers.value.forEach(c => c.abort());
  abortControllers.value = [];
  isUploading.value = false;
  alert("上传已中断,下次可继续上传");
};
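
With all the frontend pieces in place, the handleUpload stub from Step 1 can be filled in. A sketch that wires the helpers together (verifyUpload and createFormDatas are the hypothetical helpers sketched above):

const fileHash = ref("");
const fileName = ref("");

const handleUpload = async (e) => {
  const file = e.target.files[0];
  if (!file) return;
  fileName.value = file.name;
  isUploading.value = true;
  // 1. Chunk the file and compute the sampled hash
  const chunks = createChunks(file);
  fileHash.value = await calHash(chunks);
  // 2. Ask the server what it already has
  const { shouldUpload, existChunks = [] } = await verifyUpload(fileHash.value, file.name);
  if (!shouldUpload) {
    // Instant upload: the complete file is already on the server
    isUploading.value = false;
    alert("Instant upload: file already exists on the server");
    return;
  }
  // 3. Upload only the missing chunks, six at a time
  const formDatas = createFormDatas(chunks, fileHash.value, existChunks);
  await uploadWithConcurrencyControl(formDatas);
};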

Step 5: Server‑Side Chunk Reception and Merging

Chunks are received via /upload, saved under a temporary directory named after the file hash, and later merged in order using streams to keep memory usage low.

// /upload handler (multiparty parses the multipart/form-data body)
const multiparty = require("multiparty");
app.post("/upload", (req, res) => {
  const form = new multiparty.Form();
  form.parse(req, async (err, fields, files) => {
    if (err) return res.status(400).json({ status: false, message: "Chunk upload failed" });
    const fileHash = fields["filehash"][0];
    const chunkHash = fields["chunkhash"][0];
    const chunkFile = files["chunk"][0];
    // All chunks of one file live in a temp directory named after the file hash
    const chunkDir = path.resolve(UPLOAD_DIR, fileHash);
    await fse.ensureDir(chunkDir);
    // Move the temp file created by multiparty into the chunk directory
    const targetPath = path.resolve(chunkDir, chunkHash);
    await fse.move(chunkFile.path, targetPath);
    res.json({ status: true, message: "Chunk uploaded successfully" });
  });
});

// /merge handler (stream merging keeps memory usage low)
app.post("/merge", async (req, res) => {
  const { fileHash, fileName, size: CHUNK_SIZE } = req.body;
  const completeFilePath = path.resolve(UPLOAD_DIR, `${fileHash}${extractExt(fileName)}`);
  const chunkDir = path.resolve(UPLOAD_DIR, fileHash);
  if (!fse.existsSync(chunkDir)) return res.status(400).json({ status: false, message: "Chunk directory does not exist" });
  // Chunk names look like `${fileHash}-${index}`; sort by index so offsets line up
  const chunkPaths = await fse.readdir(chunkDir);
  chunkPaths.sort((a, b) => parseInt(a.split("-")[1]) - parseInt(b.split("-")[1]));
  // Create the target file first so each write stream can open it with "r+"
  await fse.ensureFile(completeFilePath);
  const mergePromises = chunkPaths.map((chunkName, index) => new Promise((resolve) => {
    const chunkPath = path.resolve(chunkDir, chunkName);
    const readStream = fse.createReadStream(chunkPath);
    // Write each chunk at its own offset; "r+" avoids truncating the file on open
    const writeStream = fse.createWriteStream(completeFilePath, { flags: "r+", start: index * CHUNK_SIZE });
    // Wait until the bytes are flushed, then delete the chunk
    writeStream.on("finish", async () => { await fse.unlink(chunkPath); resolve(); });
    readStream.pipe(writeStream);
  }));
  await Promise.all(mergePromises);
  await fse.remove(chunkDir);
  res.json({ status: true, message: "File merged successfully" });
});

Conclusion

The complete solution consists of four core stages—chunking, verification, concurrent uploading, and server‑side merging—each addressing a specific pain point of large file uploads. The provided Vue + Express code can serve as a solid foundation for production projects, with further extensions such as file‑size limits, format validation, and monitoring as needed.
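
As one example of those extensions, a minimal client-side file-size guard could run at the top of handleUpload (the 2 GB limit is an arbitrary placeholder):

const MAX_FILE_SIZE = 2 * 1024 * 1024 * 1024; // 2 GB: arbitrary example limit
const checkFileSize = (file) => {
  if (file.size > MAX_FILE_SIZE) {
    alert("File exceeds the maximum allowed size");
    return false;
  }
  return true;
};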

Tags: Vue · File Upload · Concurrency Control · Express · Chunking · Resumable Upload
Written by Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.