Implementing a Large File Chunked Upload Library: A Full-Stack TypeScript Guide
This article is a comprehensive guide to building a large file chunked upload library from scratch in TypeScript, covering server-side stream processing for memory efficiency and client-side MD5 calculation with retry mechanisms for reliable, performant transfers. Uploading large files directly runs into performance bottlenecks and memory constraints; the proposed architecture addresses both by dividing the process into frontend MD5 calculation, upload initialization, chunk transmission, and backend merging, relying on streams throughout to prevent memory exhaustion.
The technical workflow involves four main steps: calculating the file's MD5 hash on the client, which enables both integrity verification and instant upload (skipping the transfer entirely when the server already holds a file with the same hash); initializing the upload session on the server; transmitting individual chunks with retry logic; and finally merging the chunks on the server while verifying the final MD5. Crucially, stream processing is used during MD5 calculation, chunking, and merging so that no step holds the whole file in memory.
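The retry behavior in the chunk-transmission step can be sketched generically. `withRetry` below is an illustrative helper, not part of the library's documented API; its name, signature, and default of three retries are assumptions.

```typescript
// Retry an async operation (e.g. uploading one chunk) up to `maxRetries`
// extra times before giving up. Each failed chunk is retried in isolation,
// so one flaky request does not abort the whole upload.
async function withRetry<T>(
  task: () => Promise<T>,
  maxRetries: number = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await task();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // all attempts exhausted
}
```

A chunk uploader would wrap each per-chunk request in `withRetry(() => uploadChunk(...))`, keeping the retry policy in one place.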
On the server side, the FileUploaderServer class provides core capabilities including initialization, chunk reception, merging, status listing, and cleanup. The implementation leverages Node.js streams and the MultiStream library to pipe chunked data efficiently into a single output file.
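The merge step can be sketched with nothing but Node's built-in `fs` streams. The article's implementation uses the MultiStream library; the sequential piping shown here is a stand-in with the same memory profile, and `mergeChunks` is an illustrative helper, not the library's API.

```typescript
import { createReadStream, createWriteStream } from "node:fs";

// Merge ordered chunk files into one output file without ever loading a
// whole chunk list into memory: each chunk is streamed into a shared
// write stream, which is kept open between chunks via `end: false`.
async function mergeChunks(chunkPaths: string[], outputPath: string): Promise<void> {
  const out = createWriteStream(outputPath);
  for (const chunkPath of chunkPaths) {
    await new Promise<void>((resolve, reject) => {
      const src = createReadStream(chunkPath);
      src.on("error", reject);
      src.on("end", resolve);
      src.pipe(out, { end: false }); // keep destination open for next chunk
    });
  }
  // Close the output file and wait for the final flush.
  await new Promise<void>((resolve, reject) => {
    out.on("error", reject);
    out.end(resolve);
  });
}
```

Because the chunks are piped one at a time in index order, the peak memory use is bounded by the stream buffer size, not the file size.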
```typescript
interface IFileUploaderOptions {
  tempFileLocation: string;
  mergedFileLocation: string;
}

class FileUploaderServer {
  public async initFilePartUpload(fileName: string): Promise<string> { /* ... */ }
  public async uploadPartFile(uploadId: string, partIndex: number, partFile: Buffer): Promise<string> { /* ... */ }
  public async listUploadedPartFile(uploadId: string): Promise<IUploadPartInfo[]> { /* ... */ }
  public async cancelFilePartUpload(uploadId: string, deleteFolder: boolean = false): Promise<void> { /* ... */ }
  public async finishFilePartUpload(uploadId: string, fileName: string, md5: string): Promise<IMergedFileInfo> { /* ... */ }
}
```

The client-side FileUploaderClient class manages the upload lifecycle without hardcoding HTTP requests, allowing developers to inject their own API functions. It handles file chunking via the browser's FileReader and Blob.slice APIs, computes MD5 hashes incrementally using spark-md5, and orchestrates the upload sequence with automatic retries for failed chunks.
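The chunking-plus-incremental-MD5 idea can be sketched as follows. In the browser the client uses FileReader together with spark-md5's ArrayBuffer API; here `node:crypto` stands in for spark-md5 so the sketch runs outside a browser, and `getChunkListAndMd5` and the 5 MiB chunk size are illustrative assumptions, not the library's API.

```typescript
import { createHash } from "node:crypto";

const CHUNK_SIZE = 5 * 1024 * 1024; // assumed default, adjust as needed

// Split a Blob/File into fixed-size chunks with Blob.slice and feed each
// chunk to the hash incrementally, so the whole file is never resident in
// memory at once. node:crypto's MD5 stands in for spark-md5 here.
async function getChunkListAndMd5(
  file: Blob,
  chunkSize: number = CHUNK_SIZE,
): Promise<{ md5: string; chunkList: Blob[] }> {
  const hash = createHash("md5");
  const chunkList: Blob[] = [];
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    const chunk = file.slice(offset, offset + chunkSize); // zero-copy view
    chunkList.push(chunk);
    hash.update(new Uint8Array(await chunk.arrayBuffer()));
  }
  return { md5: hash.digest("hex"), chunkList };
}
```

Because MD5 of the concatenated chunks equals MD5 of the original file, the server can verify the merged result against the hash computed chunk by chunk on the client.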
```typescript
export class FileUploaderClient {
  fileUploaderClientOptions: IFileUploaderClientOptions;

  constructor(options: IFileUploaderClientOptions) {
    // Merge into a fresh object so DEFAULT_OPTIONS itself is never mutated.
    this.fileUploaderClientOptions = Object.assign({}, DEFAULT_OPTIONS, options);
  }

  public async getChunkListAndFileMd5(file: File): Promise<{ md5: string; chunkList: Blob[] }> { ... }

  public async uploadFile(file: File): Promise<any> { ... }
}
```

Finally, the library is demonstrated in a practical full-stack setup using Koa for the backend and React for the frontend. The integration shows how to configure routes, handle multipart form data, and trigger the upload process, resulting in a reliable, memory-efficient large file transfer system.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
ByteFE
Cutting‑edge tech, article sharing, and practical insights from the ByteDance frontend team.