
Efficient Large File Upload and Database Insertion with Spring Boot, Vue, and Multithreading

This article demonstrates how to implement a high‑performance file upload system using Vue's Element Plus component on the frontend and Spring Boot on the backend, comparing single‑row inserts, batch inserts, and a multithreaded producer‑consumer approach to dramatically reduce MySQL insertion time.


In many projects, exporting large files to the frontend is a common requirement, but efficiently importing massive files into a backend database is explored far less often; this guide focuses on improving that import process at the application layer.

Frontend: The author uses a Vue project with Element Plus's el-upload component to handle file selection and drag-and-drop upload.

<template>
  <el-upload class="upload-demo" drag action="http://localhost:8080/cartoon_web/upload" multiple>
    <el-icon class="el-icon--upload"><upload-filled /></el-icon>
    <div class="el-upload__text">Drop file here or <em>click to upload</em></div>
    <template #tip>
      <div class="el-upload__tip">jpg/png files with a size less than 500kb</div>
    </template>
  </el-upload>
</template>

Backend Service: A Spring Boot controller receives the multipart file, reads it line by line, and inserts the data into MySQL. The basic controller skeleton is shown below.

@PostMapping("/upload")
@CrossOrigin(origins = "*", maxAge = 3600)
public @ResponseBody Map<String, String> upload(MultipartFile file) throws IOException {
    System.out.println(file.getName());
    BufferedReader br = new BufferedReader(new InputStreamReader(file.getInputStream()));
    String line = null;
    long start = System.currentTimeMillis();
    while ((line = br.readLine()) != null) {
        // implement insertion logic
    }
    long end = System.currentTimeMillis();
    System.out.println("Total database insertion time: " + (end - start) / 1000 + "s");
    Map<String, String> res = new HashMap<>();
    res.put("1", "success");
    return res;
}

The article then evaluates three different insertion strategies.

Solution 1 – Single Row Insert

Each line is inserted individually, resulting in a total time of 1833 seconds for the test data.

@Insert("insert into test values(#{a}, #{b}, #{c}, #{d})")
void insertFile(Test test);

Solution 2 – Batch Insert

Using a MyBatis <script> block and a List<Test>, 100 rows are inserted per batch, reducing the total time to 82 seconds.

@Insert("<script>" +
        "insert into test values " +
        "<foreach collection='list' item='item' separator=','>" +
        "(#{item.a}, #{item.b}, #{item.c}, #{item.d})" +
        "</foreach>" +
        "</script>")
void insertBatch(List<Test> list);

List<Test> list = new ArrayList<>();
while ((line = br.readLine()) != null) {
    String[] lines = line.split(",");
    list.add(new Test(lines[0], lines[1], lines[2], lines[3]));
    count++;
    if (count % 100 == 0) {
        fileUploadService.insertBatch(list);
        list.clear();
    }
}
if (!list.isEmpty()) {               // flush the final partial batch
    fileUploadService.insertBatch(list);
}

Solution 3 – Multithreaded Producer/Consumer

Building on the batch approach, the author introduces a producer‑consumer model using ConcurrentLinkedQueue , AtomicInteger , and CountDownLatch to parallelise insertion. Six consumer threads are started after 500 lines are queued.

ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
boolean[] readingDone = {false};               // producer sets this once the file is fully read
int[] produced = {0};                          // lines read from the file
AtomicInteger consumed = new AtomicInteger(0); // lines taken off the queue
CountDownLatch countDownLatch = new CountDownLatch(6);
while ((line = br.readLine()) != null) {
    queue.add(line);
    produced[0]++;
    if (produced[0] == 500) {                  // start the consumers once the queue has work
        for (int i = 0; i < 6; i++) {
            new Thread(() -> {
                List<Test> list = new ArrayList<>();
                // run until reading is finished and every produced line has been consumed
                while (!readingDone[0] || produced[0] != consumed.get()) {
                    String row = queue.poll();
                    if (row != null) {
                        String[] cols = row.split(",");
                        list.add(new Test(cols[0], cols[1], cols[2], cols[3]));
                        consumed.incrementAndGet();
                        if (list.size() == 100) {      // flush a full batch
                            fileUploadService.insertBatch(list);
                            list.clear();
                        }
                    }
                }
                if (!list.isEmpty()) {                 // flush the final partial batch
                    fileUploadService.insertBatch(list);
                }
                countDownLatch.countDown();
            }).start();
        }
    }
}
readingDone[0] = true;      // signal the consumers that production is complete
countDownLatch.await();     // only stop the timer after every consumer has finished

Performance results: 3 threads – 72 s; 6 threads – 60 s; 100 000 rows – 41 s. The author stresses two details: correct termination detection (comparing the produced count against an AtomicInteger consumed counter, plus a boolean completion flag) and accurate timing (waiting on the CountDownLatch before stopping the clock).
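The termination logic above can be exercised without a database. Below is a minimal, self-contained sketch of the same pattern; the DrainDemo class name, batch size, and 10 000-line input are illustrative, and an AtomicInteger stands in for the batch-insert call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Demo of the termination pattern: consumers drain a queue and exit only once
// the producer is done AND every queued item has been consumed.
public class DrainDemo {
    static final AtomicInteger inserted = new AtomicInteger(0); // stands in for the database

    public static void main(String[] args) throws InterruptedException {
        inserted.set(0);
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        AtomicBoolean readingDone = new AtomicBoolean(false);
        AtomicInteger produced = new AtomicInteger(0);
        AtomicInteger consumed = new AtomicInteger(0);
        int threads = 6;
        CountDownLatch latch = new CountDownLatch(threads);

        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                List<String> batch = new ArrayList<>();
                // spin until production is finished and every produced item was polled
                while (!readingDone.get() || consumed.get() != produced.get()) {
                    String item = queue.poll();
                    if (item != null) {
                        batch.add(item);
                        consumed.incrementAndGet();
                        if (batch.size() == 100) {       // flush a full batch
                            inserted.addAndGet(batch.size());
                            batch.clear();
                        }
                    }
                }
                inserted.addAndGet(batch.size());        // flush the final partial batch
                latch.countDown();
            }).start();
        }

        for (int i = 0; i < 10_000; i++) {               // producer: "read" 10,000 lines
            queue.add("row-" + i);
            produced.incrementAndGet();
        }
        readingDone.set(true);                            // signal that production is finished
        latch.await();                                    // wait for all consumers to finish
        System.out.println(inserted.get());               // prints 10000
    }
}
```

Because the consumed counter is incremented only after a successful poll, consumed == produced together with the completion flag guarantees the queue is empty, so no thread can exit while rows remain.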

Problem Summary: Spring Boot's default upload size limit caused a "request size exceeds the configured maximum" error. The fix is to adjust spring.servlet.multipart.max-file-size and spring.servlet.multipart.max-request-size in application.properties.
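A minimal application.properties sketch for that fix; the 200MB values are illustrative, so size them to your expected uploads:

```properties
# Raise the per-file and per-request multipart limits (Spring Boot defaults: 1MB / 10MB)
spring.servlet.multipart.max-file-size=200MB
spring.servlet.multipart.max-request-size=200MB
```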

Conclusion: Batch insertion and multithreading dramatically improve large-scale data import performance; for very large datasets, server-side bulk loading such as MySQL's LOAD DATA INFILE statement is recommended.
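For comparison, a hedged sketch of that bulk-load route, assuming a comma-separated file and the test table used above (the file path is a placeholder, and local_infile must be enabled on both server and client):

```sql
-- Bulk-load a CSV straight into the table, bypassing the application layer
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE test
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```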

Written by Selected Java Interview Questions, a professional Java tech channel sharing common knowledge to help developers fill gaps.