
Frontend and Backend Interaction Methods for File Upload, Object APIs, and Large File Handling

This article explains how the frontend can upload files to the backend using Base64 or binary Blob transmission, introduces the key JavaScript objects involved (File, Blob, FormData, and FileReader), and provides practical code examples for validation, preview, chunked uploading, progress tracking, and resumable uploads with hash verification.


When uploading files from the frontend, two main interaction methods are used: transmitting the file as a Base64‑encoded string or sending the binary Blob directly. Base64 encodes binary data using 64 printable characters, expanding the payload by roughly one‑third, while Blob transmission via FormData keeps the original size.

The primary JavaScript objects involved in file operations are:

files: obtained from an <input type="file"> element as a FileList, an array-like collection of File objects (each File inherits from Blob).

Blob: an immutable container for raw binary data, with utility methods such as slice().

FormData: a key-value container used to send data, including binary, to the server.

FileReader: asynchronously reads a Blob/File and can return the result as an ArrayBuffer, plain text, or a Base64-encoded Data URL.

Example – Basic file selection and event handling:

<input type="file" id="uploader">
<script>
  const uploader = document.querySelector('#uploader');
  uploader.addEventListener('change', (event) => {
    const file = event.target.files[0];
    console.log('File change event', file);
  });
</script>

The file object provides size, type, name, and lastModified information, allowing developers to enforce validation rules such as maximum size (e.g., 10 MB) and allowed MIME types (e.g., JPEG/PNG):

uploader.addEventListener('change', (event) => {
  const file = event.target.files[0];
  if (file.size > 10 * 1024 * 1024) {
    return window.alert('File size cannot exceed 10 MB');
  }
  if (!['image/jpeg', 'image/png'].includes(file.type)) {
    return window.alert('Only PNG/JPEG images are allowed');
  }
});
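The checks above can be pulled into a reusable helper. The function name and default limits below are illustrative; because the helper only touches size and type, it works on any File-like object and is easy to unit-test:

```javascript
// Hypothetical reusable validator; names and defaults are illustrative.
function validateFile(file, {
  maxSize = 10 * 1024 * 1024,
  allowedTypes = ['image/jpeg', 'image/png']
} = {}) {
  if (file.size > maxSize) {
    return { ok: false, reason: 'File size cannot exceed ' + maxSize + ' bytes' };
  }
  if (!allowedTypes.includes(file.type)) {
    return { ok: false, reason: 'File type ' + file.type + ' is not allowed' };
  }
  return { ok: true };
}
```

The change handler then reduces to: const { ok, reason } = validateFile(file); if (!ok) return window.alert(reason);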

Image preview using FileReader:

const preview = document.querySelector('#preview');
function readFilePreview(file) {
  const fileReader = new FileReader();
  fileReader.onload = (event) => {
    console.log('Read complete', event.target.result);
    preview.src = event.target.result;
  };
  fileReader.readAsDataURL(file);
}

uploader.addEventListener('change', (event) => {
  const file = event.target.files[0];
  // validation omitted for brevity
  readFilePreview(file);
});

To send the selected file to the server, create a FormData instance and post it with axios (or any HTTP client) using the multipart/form-data content type (modern browsers and axios fill in the required boundary parameter automatically):

const formData = new FormData();
formData.append('file', file);
axios.post('http://localhost:3002/upload', formData, {
  headers: { 'Content-Type': 'multipart/form-data' }
}).then(res => {
  console.log('Response', res);
});

When dealing with large files, uploading the whole file at once can cause long request times, exceed server limits, and make resumability difficult. Chunked (slice) uploading solves these problems. The following utility splits a file into fixed‑size chunks:

function fileToChunks(file, chunkSize = 10 * 1024 * 1024) {
  const fileSize = file.size;
  const chunks = [];
  let current = 0;
  while (current < fileSize) {
    chunks.push(file.slice(current, current + chunkSize));
    current += chunkSize;
  }
  return chunks;
}

Upload progress can be calculated by comparing the number of successfully uploaded chunks to the total number of chunks.
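That calculation can be sketched as follows. The uploadChunk parameter is an injected, hypothetical request function (for example, a thin wrapper around axios.post), which also keeps the logic testable:

```javascript
// Upload all chunks in parallel and report progress as
// (uploaded chunks / total chunks), rounded to a percentage.
// `uploadChunk` is an injected function (hypothetical name).
async function uploadChunks(chunks, uploadChunk, onProgress) {
  let done = 0;
  await Promise.all(chunks.map(async (chunk, index) => {
    await uploadChunk(chunk, index);
    done++;
    onProgress(Math.round((done / chunks.length) * 100));
  }));
}
```

In a real application you would likely also cap concurrency and retry failed chunks, which this sketch omits.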

For resumable uploads, each file needs a unique identifier. Computing a hash (e.g., MD5) of the file content provides such an identifier. The example below uses the spark-md5 library to compute a hash incrementally over the chunks:

function getFileChunksHash(chunks) {
  return new Promise((resolve) => {
    const spark = new SparkMD5.ArrayBuffer();
    const fileReader = new FileReader();
    let index = 0;
    fileReader.onload = (event) => {
      // Feed the chunk's ArrayBuffer into the incremental MD5 digest.
      spark.append(event.target.result);
      index++;
      if (index === chunks.length) {
        // All chunks processed: finalize and resolve with the hex digest.
        return resolve(spark.end());
      }
      _read();
    };
    // Read the chunks one at a time, reusing a single FileReader.
    function _read() {
      fileReader.readAsArrayBuffer(chunks[index]);
    }
    _read();
  });
}

Because hash calculation is CPU‑intensive, it can be offloaded to a Web Worker for better UI responsiveness.

Some platforms (e.g., Bilibili) further optimize large uploads by first hashing a large slice, then subdividing that slice into smaller chunks for actual transmission, reducing redundant data transfer.

Finally, the concept of “instant upload” (秒传) checks the server for an existing file with the same hash; if found, the server skips the actual upload and immediately returns a success response, giving the user the perception of a near‑instant upload.

Tags: frontend, file upload, hash, Blob, chunked upload, FormData, Base64, FileReader
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
