Mastering Large File Uploads with Spring Boot: Chunked Upload Guide
This article explains why traditional single‑file uploads fail for large files, outlines the benefits of chunked uploading, and provides a complete Spring Boot implementation—including backend controllers, high‑performance merge logic, Vue front‑end code, enterprise‑grade optimizations, performance test results, and best‑practice recommendations.
In internet applications, uploading large files is a common yet tricky challenge; traditional single‑file uploads often encounter timeouts and memory overflow when files exceed 100 MB.
1. Why is chunked upload needed?
Unstable network transmission: Long single requests are prone to interruption.
Server resource exhaustion: Loading a whole file at once can cause memory overflow.
High cost of upload failure: The entire file must be re‑uploaded.
Advantages of chunked upload
Reduces load per request.
Supports resumable uploads.
Enables concurrent uploading for higher efficiency.
Lowers server memory pressure.
2. Core principle of chunked upload
The idea is straightforward: the front end splits the file into fixed-size chunks, uploads each chunk as an independent request (optionally in parallel), and the server stores the chunks in a directory keyed by the file's MD5 and an upload ID. Once all chunks have arrived, the server merges them in index order into the final file. Because each chunk is small and independently retriable, a failure costs only one chunk, and chunks that already reached the server can be skipped on resume.
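The splitting arithmetic at the heart of the scheme fits in a few lines. The sketch below is only an illustration (the `ChunkPlanner` name is hypothetical; the 5 MB chunk size mirrors the front-end example later in the article):

```java
// Illustrative chunk-boundary calculation: split a file of a given size into 5 MB ranges.
public class ChunkPlanner {
    static final long CHUNK_SIZE = 5L * 1024 * 1024; // 5 MB, matching the front-end example

    /** Returns an array of [startOffset, endOffsetExclusive] pairs, one per chunk. */
    public static long[][] plan(long fileSize) {
        int count = (int) Math.ceil((double) fileSize / CHUNK_SIZE);
        long[][] ranges = new long[count][2];
        for (int i = 0; i < count; i++) {
            ranges[i][0] = i * CHUNK_SIZE;                           // start offset
            ranges[i][1] = Math.min(fileSize, (i + 1) * CHUNK_SIZE); // end offset (exclusive)
        }
        return ranges;
    }

    public static void main(String[] args) {
        long[][] r = plan(12L * 1024 * 1024); // a 12 MB file splits into 3 chunks
        System.out.println(r.length);         // prints 3
    }
}
```

The last chunk is simply whatever remains, so it may be smaller than the others.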
3. Spring Boot implementation
3.1 Core dependencies
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.11.0</version>
</dependency>
</dependencies>
3.2 Controller implementation
@RestController
@RequestMapping("/upload")
public class ChunkUploadController {
private final String CHUNK_DIR = "uploads/chunks/";
private final String FINAL_DIR = "uploads/final/";
/** Initialize upload */
@PostMapping("/init")
public ResponseEntity<String> initUpload(@RequestParam String fileName,
@RequestParam String fileMd5) {
String uploadId = UUID.randomUUID().toString();
Path chunkDir = Paths.get(CHUNK_DIR, fileMd5 + "_" + uploadId);
try { Files.createDirectories(chunkDir); }
catch (IOException e) { return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
.body("Failed to create chunk directory"); }
return ResponseEntity.ok(uploadId);
}
/** Upload a chunk */
@PostMapping("/chunk")
public ResponseEntity<String> uploadChunk(@RequestParam MultipartFile chunk,
@RequestParam String uploadId,
@RequestParam String fileMd5,
@RequestParam Integer index) {
String chunkName = "chunk_" + index + ".tmp";
Path filePath = Paths.get(CHUNK_DIR, fileMd5 + "_" + uploadId, chunkName);
try { chunk.transferTo(filePath); return ResponseEntity.ok("Chunk uploaded successfully"); }
catch (IOException e) { return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
.body("Failed to save chunk"); }
}
/** Merge chunks */
@PostMapping("/merge")
public ResponseEntity<String> mergeChunks(@RequestParam String fileName,
@RequestParam String uploadId,
@RequestParam String fileMd5) {
File chunkDir = new File(CHUNK_DIR + fileMd5 + "_" + uploadId);
File[] chunks = chunkDir.listFiles();
if (chunks == null || chunks.length == 0) {
return ResponseEntity.badRequest().body("No chunk files found");
}
Arrays.sort(chunks, Comparator.comparingInt(f ->
Integer.parseInt(f.getName().split("_")[1].split("\\.")[0])));
Path finalPath = Paths.get(FINAL_DIR, fileName);
try {
Files.createDirectories(finalPath.getParent()); // make sure the final directory exists
try (BufferedOutputStream outputStream = new BufferedOutputStream(Files.newOutputStream(finalPath))) {
for (File chunkFile : chunks) { Files.copy(chunkFile.toPath(), outputStream); }
}
FileUtils.deleteDirectory(chunkDir); // clean up chunk files only after a successful merge
return ResponseEntity.ok("File merged successfully: " + finalPath);
} catch (IOException e) {
return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
.body("Merge failed: " + e.getMessage());
}
}
}
3.3 High-performance merge optimization
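Besides the buffered approach shown below, the merge can lean on NIO's `FileChannel.transferFrom`, which lets the operating system move the bytes without staging them in a Java-side buffer. This is a sketch under the assumption that `chunkFiles` is already sorted by chunk index (the `ChannelMerge` name is illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class ChannelMerge {
    /** Merge pre-sorted chunk files into targetFile using OS-level channel transfers. */
    public static void merge(File targetFile, List<File> chunkFiles) throws IOException {
        try (FileChannel target = FileChannel.open(targetFile.toPath(),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long position = 0; // write offset in the target file
            for (File chunk : chunkFiles) {
                try (FileChannel src = FileChannel.open(chunk.toPath(), StandardOpenOption.READ)) {
                    long size = src.size();
                    long done = 0;
                    // transferFrom may copy fewer bytes than requested, so loop until the chunk is fully written
                    while (done < size) {
                        done += target.transferFrom(src, position + done, size - done);
                    }
                    position += size;
                }
            }
        }
    }
}
```

Because the chunks are appended contiguously from offset 0, each `transferFrom` position is never beyond the current end of the target file, which is a precondition for the call to make progress.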
// Use RandomAccessFile with a small reusable buffer to keep memory usage flat
public void mergeFiles(File targetFile, List<File> chunkFiles) throws IOException {
try (RandomAccessFile target = new RandomAccessFile(targetFile, "rw")) {
byte[] buffer = new byte[1024 * 8]; // 8 KB reusable buffer
for (File chunk : chunkFiles) {
try (RandomAccessFile src = new RandomAccessFile(chunk, "r")) {
int bytesRead;
while ((bytesRead = src.read(buffer)) != -1) {
// the file pointer advances automatically, so no manual position tracking is needed
target.write(buffer, 0, bytesRead);
}
}
}
}
}
4. Front-end implementation (Vue example)
4.1 Chunk processing function
// 5MB chunk size
const CHUNK_SIZE = 5 * 1024 * 1024;
/** Process file into chunks */
function processFile(file) {
const chunkCount = Math.ceil(file.size / CHUNK_SIZE);
const chunks = [];
for (let i = 0; i < chunkCount; i++) {
const start = i * CHUNK_SIZE;
const end = Math.min(file.size, start + CHUNK_SIZE);
chunks.push(file.slice(start, end));
}
return chunks;
}
4.2 Upload logic with progress display
async function uploadFile(file) {
// 1. Calculate the file MD5 once and reuse it in every request
const fileMd5 = await calculateFileMD5(file);
// 2. Initialize upload (sent as form fields, since the backend reads request parameters)
const initForm = new FormData();
initForm.append('fileName', file.name);
initForm.append('fileMd5', fileMd5);
const { data: uploadId } = await axios.post('/upload/init', initForm);
// 3. Upload chunks concurrently
const chunks = processFile(file);
const total = chunks.length;
let uploaded = 0;
await Promise.all(chunks.map((chunk, index) => {
const formData = new FormData();
formData.append('chunk', chunk, `chunk_${index}`);
formData.append('index', index);
formData.append('uploadId', uploadId);
formData.append('fileMd5', fileMd5);
// axios sets the multipart Content-Type (with boundary) automatically for FormData
return axios.post('/upload/chunk', formData).then(() => {
uploaded++;
updateProgress(((uploaded * 100) / total).toFixed(1)); // update after each chunk completes
});
}));
// 4. Trigger merge
const mergeForm = new FormData();
mergeForm.append('fileName', file.name);
mergeForm.append('uploadId', uploadId);
mergeForm.append('fileMd5', fileMd5);
const result = await axios.post('/upload/merge', mergeForm);
alert(`Upload succeeded: ${result.data}`);
}
5. Enterprise-level optimization
5.1 Resume upload check
@GetMapping("/check/{fileMd5}/{uploadId}")
public ResponseEntity<List<Integer>> getUploadedChunks(@PathVariable String fileMd5,
@PathVariable String uploadId) {
Path chunkDir = Paths.get(CHUNK_DIR, fileMd5 + "_" + uploadId);
if (!Files.exists(chunkDir)) {
return ResponseEntity.ok(Collections.emptyList());
}
try (Stream<Path> stream = Files.list(chunkDir)) { // close the stream so the directory handle is released
List<Integer> uploaded = stream
.map(p -> p.getFileName().toString())
.filter(name -> name.startsWith("chunk_"))
.map(name -> name.replace("chunk_", "").replace(".tmp", ""))
.map(Integer::parseInt)
.collect(Collectors.toList());
return ResponseEntity.ok(uploaded);
} catch (IOException e) {
return ResponseEntity.status(500).body(Collections.emptyList());
}
}
5.2 Chunk security verification
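The endpoint below verifies an HMAC-SHA256 signature supplied by the front end. For reference, the same signature can be computed with the JDK alone, without commons-codec; this is a sketch, and `ChunkSigner` with its placeholder key is purely illustrative:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class ChunkSigner {
    /** Compute an HMAC-SHA256 hex signature over the chunk bytes (JDK-only equivalent of HmacUtils.hmacSha256Hex). */
    public static String hmacSha256Hex(String secretKey, byte[] payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] raw = mac.doFinal(payload);
        StringBuilder hex = new StringBuilder();
        for (byte b : raw) hex.append(String.format("%02x", b)); // lowercase hex, matching commons-codec
        return hex.toString();
    }
}
```

The sender and receiver must share the secret key and hash exactly the same bytes, otherwise verification will always fail.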
@PostMapping("/chunk")
public ResponseEntity<?> uploadChunk(@RequestParam MultipartFile chunk,
@RequestParam String sign) { // signature from front‑end
String secretKey = "your-secret-key";
String serverSign = HmacUtils.hmacSha256Hex(secretKey, chunk.getBytes());
if (!serverSign.equals(sign)) {
return ResponseEntity.status(403).body("Signature verification failed");
}
// process chunk ...
return ResponseEntity.ok("Chunk uploaded successfully");
}
5.3 Cloud storage integration (MinIO example)
@Configuration
public class MinioConfig {
@Bean
public MinioClient minioClient() {
return MinioClient.builder()
.endpoint("http://minio:9000")
.credentials("minio-access", "minio-secret")
.build();
}
}
@Service
public class MinioUploadService {
@Autowired
private MinioClient minioClient;
public void uploadChunk(String bucket, String object, InputStream chunkStream, long length) throws Exception {
minioClient.putObject(PutObjectArgs.builder()
.bucket(bucket)
.object(object)
.stream(chunkStream, length, -1)
.build());
}
}
6. Performance test comparison
Using a 10 GB file, the results were:
Traditional upload: >3 hours, >10 GB memory, 100 % retry cost.
Chunked upload (single thread): 1.5 hours, ~100 MB memory, ~10 % retry cost.
Chunked upload (multi‑thread): 20 minutes, ~100 MB memory, <1 % retry cost.
7. Best practice recommendations
Chunk size selection
Intranet: 10 MB‑20 MB
Mobile network: 1 MB‑5 MB
WAN: 500 KB‑1 MB
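These recommendations can be encoded as a small policy helper. The enum and method names below are illustrative, not part of the article's code, and the concrete values simply pick a point inside each recommended range:

```java
// Hypothetical helper mapping network type to a chunk size, following the guidance above.
public class ChunkSizePolicy {
    public enum Network { INTRANET, MOBILE, WAN }

    public static long chunkSizeBytes(Network n) {
        switch (n) {
            case INTRANET: return 10L * 1024 * 1024; // intranet: 10-20 MB, lower bound chosen
            case MOBILE:   return 5L * 1024 * 1024;  // mobile: 1-5 MB, upper bound chosen
            default:       return 1L * 1024 * 1024;  // WAN: 500 KB-1 MB, upper bound chosen
        }
    }

    public static void main(String[] args) {
        System.out.println(chunkSizeBytes(Network.MOBILE)); // prints 5242880
    }
}
```

In practice the value could also be chosen dynamically, for example by measuring the upload time of the first chunk and adjusting subsequent chunk sizes.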
Scheduled cleanup strategy
@Scheduled(fixedRate = 24 * 60 * 60 * 1000) // runs once a day
public void cleanTempFiles() {
File[] dirs = new File(CHUNK_DIR).listFiles(File::isDirectory);
if (dirs == null) return;
long cutoff = System.currentTimeMillis() - 24 * 60 * 60 * 1000L;
for (File dir : dirs) {
// delete only chunk directories that have been idle for more than 24 hours
if (dir.lastModified() < cutoff) {
try { FileUtils.deleteDirectory(dir); } catch (IOException ignored) {}
}
}
}
Upload size limits
spring:
servlet:
multipart:
max-file-size: 100MB # per chunk limit
max-request-size: 100MB
Conclusion
Spring Boot chunked upload resolves the core pain points of large‑file transmission; combined with resumable upload, chunk verification, and security controls, it enables a robust enterprise‑grade file transfer solution. The provided code can be integrated directly into production, with chunk size and concurrency tuned to specific requirements.
Code Ape Tech Column
Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn