Building a Large File Chunked Upload Library in TypeScript (easy-file-uploader)
This article explains the technical details of chunked uploading for large files, walks through the design of a custom TypeScript library for both the server and the client, provides step-by-step implementation code, and demonstrates a complete Koa + React demo, giving developers a ready-to-use solution for efficient large file transfers.
1. Introduction
When a business scenario involves large files, uploading the whole file to the server in a single request can severely impact server performance and make uploads slow. Chunked uploading solves this by splitting the file into smaller parts, uploading the parts individually so that failed parts can be retried without restarting the whole transfer, and keeping memory consumption low.
2. Chunked Upload Technical Scheme
The typical workflow consists of four steps:
1. Calculate the file MD5 on the front-end (used for integrity verification and instant upload).
2. Send an initialization request to the back-end, which creates a temporary directory based on the MD5.
3. Split the file into chunks and upload each chunk to the server.
4. Send a finish request; the back-end merges the chunks, verifies the final MD5, and stores the file.
Because MD5 calculation, chunking, and merging can be memory‑intensive for large files, streams are used to keep memory usage low.
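Node's crypto module exposes an incremental hash API, so the MD5 of an arbitrarily large file can be computed over a read stream without ever holding the whole file in memory. A minimal sketch of such a helper follows; the name calculateFileMd5 matches the helper used later in the article, but this particular implementation is an illustration, not the library's actual code:

```typescript
import { createHash } from 'crypto';
import { createReadStream, writeFileSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';

// Stream the file through an incremental MD5 hash so memory use
// stays bounded regardless of file size.
function calculateFileMd5(filePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const hash = createHash('md5');
    createReadStream(filePath, { highWaterMark: 1024 * 1024 }) // read in 1 MiB slices
      .on('data', chunk => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')))
      .on('error', reject);
  });
}

// Quick demonstration against a small temp file.
const demoPath = join(tmpdir(), 'md5-demo.txt');
writeFileSync(demoPath, 'hello chunked upload');
calculateFileMd5(demoPath).then(md5 => console.log(md5));
```

The same incremental idea applies on the front-end, where SparkMD5 plays the role of crypto's hash object.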
3. Server Implementation (easy-file-uploader‑server)
The core of the server side is the FileUploaderServer class, which provides the basic capabilities required by the workflow.
interface IFileUploaderOptions {
  tempFileLocation: string;   // directory for chunks
  mergedFileLocation: string; // directory for merged files
}
class FileUploaderServer {
  private fileUploaderOptions: IFileUploaderOptions;

  constructor(options: IFileUploaderOptions) {
    // Merge onto a fresh object so the shared DEFAULT_OPTIONS is never mutated.
    this.fileUploaderOptions = Object.assign({}, DEFAULT_OPTIONS, options);
  }
  /** Initialize upload – create a folder named by MD5 */
  public async initFilePartUpload(fileName: string): Promise<string> {
    const { tempFileLocation } = this.fileUploaderOptions;
    await fse.ensureDir(tempFileLocation);
    const uploadId = calculateMd5(`${fileName}-${Date.now()}`);
    const uploadFolderPath = path.join(tempFileLocation, uploadId);
    if (fse.existsSync(uploadFolderPath)) {
      throw new FolderExistException('found same upload folder, maybe hash collision');
    }
    await fse.mkdir(uploadFolderPath);
    return uploadId;
  }
  /** Upload a single chunk */
  public async uploadPartFile(uploadId: string, partIndex: number, partFile: Buffer): Promise<string> {
    const { tempFileLocation } = this.fileUploaderOptions;
    const uploadFolderPath = path.join(tempFileLocation, uploadId);
    if (!fse.existsSync(uploadFolderPath)) {
      throw new NotFoundException('not found upload folder');
    }
    const partFileMd5 = calculateMd5(partFile);
    const partFileLocation = path.join(uploadFolderPath, `${partIndex}|${partFileMd5}.part`);
    await fse.writeFile(partFileLocation, partFile);
    return partFileMd5;
  }
  /** List uploaded chunks */
  public async listUploadedPartFile(uploadId: string): Promise<{ path: string; index: number; md5: string }[]> {
    const { tempFileLocation } = this.fileUploaderOptions;
    const uploadFolderPath = path.join(tempFileLocation, uploadId);
    if (!fse.existsSync(uploadFolderPath)) {
      throw new NotFoundException('not found upload folder');
    }
    const dirList = await listDir(uploadFolderPath);
    return dirList.map(item => {
      const [index, md5] = item.name.replace(/\.part$/, '').split('|');
      return { path: item.path, index: parseInt(index), md5 };
    });
  }
  /** Cancel upload – hard or soft delete */
  public async cancelFilePartUpload(uploadId: string, deleteFolder = false): Promise<void> {
    const { tempFileLocation } = this.fileUploaderOptions;
    const uploadFolderPath = path.join(tempFileLocation, uploadId);
    if (!fse.existsSync(uploadFolderPath)) {
      throw new NotFoundException('not found upload folder');
    }
    if (deleteFolder) {
      await fse.remove(uploadFolderPath);
    } else {
      await fse.rename(uploadFolderPath, `${uploadFolderPath}[removed]`);
    }
  }
  /** Finish upload – merge chunks and verify MD5 */
  public async finishFilePartUpload(uploadId: string, fileName: string, md5: string): Promise<{ path: string; md5: string }> {
    const { mergedFileLocation, tempFileLocation } = this.fileUploaderOptions;
    const uploadFolderPath = path.join(tempFileLocation, uploadId);
    if (!fse.existsSync(uploadFolderPath)) {
      throw new NotFoundException('not found upload folder');
    }
    const dirList = await listDir(uploadFolderPath);
    const files = dirList.filter(item => item.path.endsWith('.part'));
    const mergedFileDir = path.join(mergedFileLocation, md5);
    await fse.ensureDir(mergedFileDir);
    const mergedFilePath = path.join(mergedFileDir, fileName);
    await mergePartFile(files, mergedFilePath);
    await wait(1000); // ensure file is written before MD5 check
    const mergedFileMd5 = await calculateFileMd5(mergedFilePath);
    if (mergedFileMd5 !== md5) {
      throw new Md5Exception('md5 checked failed');
    }
    return { path: mergedFilePath, md5 };
  }
}

The server uses fs-extra for file operations, streams for merging, and custom exceptions for error handling.
4. Client Implementation (easy-file-uploader‑client)
The front-end package provides FileUploaderClient, which handles MD5 calculation and chunk creation, and orchestrates the upload workflow through user-provided request functions.
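The article does not reproduce the client options type. Based on how the class below consumes it, its shape is plausibly along these lines (field names come from the code below, but the exact types are inferred, not copied from the library):

```typescript
// Inferred option shapes; the library's actual declarations may differ.
interface IFileUploaderRequestOptions {
  retryTimes: number;                                        // retry passes over failed chunks
  initFilePartUploadFunc: () => Promise<void>;               // ask the server for an uploadId
  uploadPartFileFunc: (chunk: Blob, index: number) => Promise<void>; // send one chunk
  finishFilePartUploadFunc: (md5: string) => Promise<any>;   // trigger merge + MD5 check
}

interface IFileUploaderClientOptions {
  chunkSize: number; // bytes per chunk, e.g. 5 * 1024 * 1024
  requestOptions: IFileUploaderRequestOptions;
}
```

Keeping the HTTP layer behind these three functions is what makes the client framework-agnostic.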
class FileUploaderClient {
  private options: IFileUploaderClientOptions;

  constructor(options: IFileUploaderClientOptions) {
    // Merge onto a fresh object so the shared DEFAULT_OPTIONS is never mutated.
    this.options = Object.assign({}, DEFAULT_OPTIONS, options);
  }
  /** Split file into chunks and compute MD5 */
  public async getChunkListAndFileMd5(file: File): Promise<{ md5: string; chunkList: Blob[] }> {
    const chunkSize = this.options.chunkSize;
    const chunks = Math.ceil(file.size / chunkSize);
    const spark = new SparkMD5.ArrayBuffer();
    const fileReader = new FileReader();
    const blobSlice = getBlobSlice();
    const chunkList: Blob[] = [];
    let current = 0;
    return new Promise((resolve, reject) => {
      fileReader.onload = e => {
        if (e?.target?.result instanceof ArrayBuffer) {
          spark.append(e.target.result);
        }
        current++;
        if (current < chunks) {
          loadNext();
        } else {
          resolve({ md5: spark.end(), chunkList });
        }
      };
      fileReader.onerror = e => reject(e);
      const loadNext = () => {
        const start = current * chunkSize;
        const end = Math.min(start + chunkSize, file.size);
        const chunk = blobSlice.call(file, start, end);
        chunkList.push(chunk);
        fileReader.readAsArrayBuffer(chunk);
      };
      loadNext();
    });
  }
  /** Orchestrate the whole upload process */
  public async uploadFile(file: File): Promise<any> {
    const request = this.options.requestOptions;
    const { md5, chunkList } = await this.getChunkListAndFileMd5(file);
    const retryList: number[] = [];
    if (
      !request?.retryTimes ||
      !request?.initFilePartUploadFunc ||
      !request?.uploadPartFileFunc ||
      !request?.finishFilePartUploadFunc
    ) {
      throw new Error('invalid request options');
    }
    await request.initFilePartUploadFunc();
    for (let i = 0; i < chunkList.length; i++) {
      try {
        await request.uploadPartFileFunc(chunkList[i], i);
      } catch {
        retryList.push(i);
      }
    }
    // Retry failed chunks for up to retryTimes passes.
    for (let r = 0; r < request.retryTimes; r++) {
      if (!retryList.length) break;
      for (let i = 0; i < retryList.length; i++) {
        const idx = retryList[i];
        try {
          await request.uploadPartFileFunc(chunkList[idx], idx);
          retryList.splice(i, 1);
          i--;
        } catch {
          /* keep the chunk in the retry list */
        }
      }
    }
    if (retryList.length) {
      throw new Error(`upload failed, chunks ${JSON.stringify(retryList)} not uploaded`);
    }
    return await request.finishFilePartUploadFunc(md5);
  }
}

Developers supply the actual HTTP calls (init, upload part, finish) through the requestOptions object, which lets the client work with any back-end framework.
5. Demo with Koa (server) and React (client)
A minimal Koa server creates an instance of FileUploaderServer and exposes three routes: /api/initUpload, /api/uploadPart, and /api/finishUpload. The React front-end uses FileUploaderClient together with axios to call those routes, showing a complete end-to-end upload flow.
const app = new Koa();
const router = new KoaRouter();

const fileUploader = new FileUploaderServer({
  tempFileLocation: path.join(__dirname, './public/tempUploadFile'),
  mergedFileLocation: path.join(__dirname, './public/mergedUploadFile'),
});

router.post('/api/initUpload', async ctx => {
  const { name } = ctx.request.body;
  ctx.body = { uploadId: await fileUploader.initFilePartUpload(name) };
});

router.post('/api/uploadPart', upload.single('partFile'), async ctx => {
  const { uploadId, partIndex } = ctx.request.body;
  const partFileMd5 = await fileUploader.uploadPartFile(uploadId, Number(partIndex), ctx.file.buffer);
  ctx.body = { partFileMd5 };
});

router.post('/api/finishUpload', async ctx => {
  const { uploadId, name, md5 } = ctx.request.body;
  const { path: filePath } = await fileUploader.finishFilePartUpload(uploadId, name, md5);
  ctx.body = { path: filePath.split('/public/')[1] };
});

app.use(cors()).use(bodyParser()).use(router.routes()).listen(10001);

The React component renders a file input and a button; when clicked it runs the client's uploadFile method, displays the returned file URL, and verifies that the file is stored correctly on the server.
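The React side's request wiring is described but not shown. A dependency-free sketch using the fetch API targets the three routes above; the demo itself uses axios, and createRequestOptions, the base URL, and the retry budget here are illustrative assumptions:

```typescript
// Build the requestOptions consumed by FileUploaderClient, wired with
// fetch against the demo's Koa routes. Hypothetical helper for illustration.
function createRequestOptions(fileName: string, baseUrl = 'http://localhost:10001') {
  let uploadId = ''; // captured by the closures below after init

  return {
    retryTimes: 2, // assumed retry budget
    // Ask the server for an uploadId before any chunk is sent.
    initFilePartUploadFunc: async (): Promise<void> => {
      const res = await fetch(`${baseUrl}/api/initUpload`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ name: fileName }),
      });
      uploadId = (await res.json()).uploadId;
    },
    // Send one chunk as multipart form data.
    uploadPartFileFunc: async (chunk: Blob, index: number): Promise<void> => {
      const form = new FormData();
      form.append('uploadId', uploadId);
      form.append('partIndex', String(index));
      form.append('partFile', chunk);
      await fetch(`${baseUrl}/api/uploadPart`, { method: 'POST', body: form });
    },
    // Tell the server to merge the chunks and verify the final MD5.
    finishFilePartUploadFunc: async (md5: string): Promise<any> => {
      const res = await fetch(`${baseUrl}/api/finishUpload`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ uploadId, name: fileName, md5 }),
      });
      return (await res.json()).path;
    },
  };
}
```

An uploader would then be built as new FileUploaderClient({ chunkSize, requestOptions: createRequestOptions(file.name) }) and driven by uploadFile(file).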
6. Recruitment Note
The article concludes with a brief invitation from an interactive technology team that combines game capabilities with advertising. It mentions open positions for front‑end, full‑stack, and other roles in Beijing, Shanghai, Hangzhou, and Shenzhen, encouraging interested engineers to contact via email.
ByteDance ADFE Team
Official account of ByteDance Advertising Frontend Team