How to Implement Efficient Large File Chunked Download with Java and MinIO
Learn how to download massive files—such as multi‑gigabyte videos—by splitting them into chunks using HTTP Range headers, implementing a Java Spring backend with MinIO storage, multi‑threaded retrieval, and seamless merging, while also handling breakpoint resume and Swagger documentation.
Large file download can exhaust memory if the whole file is loaded at once.
Using the HTTP Range header, a file can be divided into smaller parts, downloaded separately, and then merged.
Understanding the Range Header
The Range request header tells the server which byte range to return. The server responds with status 206 (Partial Content) for valid ranges or 416 if the range is unsatisfiable.
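For reference, the header takes three common forms: `bytes=0-499` (an explicit range), `bytes=500-` (from an offset to the end), and `bytes=-200` (the last 200 bytes). A minimal standalone parser illustrating these rules might look like this; the class and method names are illustrative, not from the article's code:

```java
public class RangeParser {
    /**
     * Parses a Range header ("bytes=start-end", "bytes=start-", or "bytes=-suffix")
     * into absolute inclusive [start, end] offsets for a file of fileSize bytes.
     * Returns null when the range cannot be satisfied (maps to HTTP 416).
     */
    static long[] parse(String range, long fileSize) {
        String spec = range.replace("bytes=", "").trim();
        long start, end;
        if (spec.startsWith("-")) {
            // suffix form: the last N bytes of the file
            long suffix = Long.parseLong(spec.substring(1));
            start = Math.max(fileSize - suffix, 0);
            end = fileSize - 1;
        } else {
            String[] parts = spec.split("-");
            start = Long.parseLong(parts[0].trim());
            end = (parts.length == 2 && !parts[1].trim().isEmpty())
                    ? Math.min(Long.parseLong(parts[1].trim()), fileSize - 1)
                    : fileSize - 1;
        }
        if (start >= fileSize || start > end) return null; // 416 Range Not Satisfiable
        return new long[]{start, end};
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parse("bytes=0-499", 1000))); // [0, 499]
        System.out.println(java.util.Arrays.toString(parse("bytes=500-", 1000)));  // [500, 999]
        System.out.println(java.util.Arrays.toString(parse("bytes=-200", 1000)));  // [800, 999]
        System.out.println(parse("bytes=1000-", 1000));                            // null -> 416
    }
}
```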
Chunked Download Workflow
1. Probe the file on MinIO to obtain size and metadata.
2. Determine chunk size and spawn threads equal to the number of chunks.
3. Each thread sends a request with a Range header to download its assigned part.
4. After all parts are downloaded, merge them sequentially and delete temporary files.
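The chunk arithmetic in step 2 can be sketched as follows. This helper is illustrative and not part of the article's code (which, as shown later, instead lets the final request run past end-of-file and relies on the server to clamp it):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkPlanner {
    /** One chunk: its page index and inclusive byte range for the Range header. */
    record Chunk(long page, long start, long end) {}

    /** Splits a file of fSize bytes into perPage-sized inclusive byte ranges. */
    static List<Chunk> plan(long fSize, long perPage) {
        List<Chunk> chunks = new ArrayList<>();
        long pages = (fSize + perPage - 1) / perPage; // ceiling division
        for (long i = 0; i < pages; i++) {
            long start = i * perPage;
            long end = Math.min(start + perPage - 1, fSize - 1); // last chunk may be shorter
            chunks.add(new Chunk(i, start, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // a 1050-byte file in 500-byte chunks -> bytes=0-499, 500-999, 1000-1049
        for (Chunk c : plan(1050, 500)) {
            System.out.println("page " + c.page() + ": bytes=" + c.start() + "-" + c.end());
        }
    }
}
```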
Key Code Snippets
<code>@GetMapping("/singlePartFileDownload")
public void singlePartFileDownload(@RequestParam("fileName") String fileName,
                                   HttpServletRequest request,
                                   HttpServletResponse response) throws Exception {
    // obtain file info from MinIO
    StatObjectResponse stat = minioClient.statObject(
            StatObjectArgs.builder()
                    .bucket("longxia")
                    .object(fileName)
                    .build());
    long fSize = stat.size();
    response.setContentType("application/octet-stream");
    // expose the total size; the client's download helper reads this header
    response.setHeader("fSize", String.valueOf(fSize));
    // handle Range header
    String range = request.getHeader("Range");
    long start = 0, end = fSize - 1;
    if (range != null) {
        String[] parts = range.replace("bytes=", "").split("-");
        start = Long.parseLong(parts[0].trim());
        if (parts.length == 2 && !parts[1].trim().isEmpty()) {
            end = Long.parseLong(parts[1].trim());
            if (end >= fSize) end = fSize - 1;
        }
        if (start >= fSize || start > end) {
            // range cannot be satisfied: respond 416 as described above
            response.setStatus(HttpServletResponse.SC_REQUESTED_RANGE_NOT_SATISFIABLE);
            return;
        }
        response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);
    }
    long length = end - start + 1;
    response.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + fSize);
    response.setHeader("Content-Length", String.valueOf(length));
    // stream only the requested byte range out of MinIO
    try (GetObjectResponse stream = minioClient.getObject(
            GetObjectArgs.builder()
                    .bucket(stat.bucket())
                    .object(stat.object())
                    .offset(start)
                    .length(length)
                    .build());
         BufferedOutputStream out = new BufferedOutputStream(response.getOutputStream())) {
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = stream.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
        out.flush();
    }
}
</code> <code>private FileInfo download(long start, long end, long page, String fName) throws Exception {
    File file = new File(down_path, page + "-" + fName);
    // breakpoint resume: a chunk that already exists at full size is skipped
    if (file.exists() && page != -1 && file.length() == per_page) {
        return null;
    }
    HttpGet httpGet = new HttpGet("http://127.0.0.1:8080/download2/singlePartFileDownload?fileName=" + fName);
    httpGet.setHeader("Range", "bytes=" + start + "-" + end);
    long fSize;
    try (CloseableHttpResponse resp = client.execute(httpGet);
         InputStream is = resp.getEntity().getContent();
         FileOutputStream fos = new FileOutputStream(file)) {
        byte[] buf = new byte[1024];
        int len;
        while ((len = is.read(buf)) != -1) {
            fos.write(buf, 0, len);
        }
        // read the server's custom fSize header here, while resp is still in scope;
        // it is inaccessible once the try-with-resources block closes it
        fSize = Long.parseLong(resp.getFirstHeader("fSize").getValue());
    }
    // a requested end offset past end-of-file marks the final chunk: start the merge
    if (end - fSize > 0) {
        mergeAllPartFile(fName, page);
    }
    return new FileInfo(fSize, fName);
}
</code>Multi‑threaded execution is triggered by a controller endpoint that creates a thread pool and submits a Download runnable for each chunk.
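The pool and the Download runnable themselves are not shown in the article. Their fan-out pattern can be simulated with the following self-contained sketch, where in-memory array copies stand in for the HTTP chunk requests and all names are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    /** Copies source into a new array chunk-by-chunk on a thread pool. */
    static byte[] parallelCopy(byte[] source, long perPage) throws InterruptedException {
        byte[] target = new byte[source.length];
        long pages = (source.length + perPage - 1) / perPage; // ceiling division
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (long i = 0; i < pages; i++) {
            final long start = i * perPage;
            final long end = Math.min(start + perPage - 1, source.length - 1);
            // each task handles one byte range, like a Download runnable fetching one chunk
            pool.submit(() -> System.arraycopy(source, (int) start, target, (int) start,
                                               (int) (end - start + 1)));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return target;
    }

    public static void main(String[] args) throws InterruptedException {
        byte[] source = new byte[1050];
        for (int i = 0; i < source.length; i++) source[i] = (byte) i;
        System.out.println(java.util.Arrays.equals(source, parallelCopy(source, 500))); // true
    }
}
```

The chunks can complete in any order because each task writes to a disjoint region, which is exactly why the article's merge step must wait for every part before concatenating.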
<code>@GetMapping("/fenPianDownloadFile")
public String fenPianDownloadFile(@RequestParam("fileName") String fileName) throws Exception {
    // probe request (page -1) fetches the first few bytes just to learn the file size
    FileInfo fileInfo = download(0, 10, -1, fileName);
    if (fileInfo != null) {
        long pages = fileInfo.fSize / per_page;
        // the loop runs to i <= pages: the final request's end offset reaches past
        // end-of-file, the server clamps it, and the client's end - fSize > 0 check
        // uses that over-sized offset to detect the last chunk and trigger the merge
        for (int i = 0; i <= pages; i++) {
            pool.submit(new Download(i * per_page,
                                     (i + 1) * per_page - 1,
                                     i, fileInfo.fName));
        }
    }
    return "success";
}
</code>After all parts are saved, mergeAllPartFile concatenates them in order and removes temporary files.
<code>private void mergeAllPartFile(String fName, long page) throws Exception {
    File target = new File(down_path, fName);
    try (BufferedOutputStream os = new BufferedOutputStream(new FileOutputStream(target))) {
        for (int i = 0; i <= page; i++) {
            File part = new File(down_path, i + "-" + fName);
            // poll until the part exists and, unless it is the last part, has reached
            // full size; a CountDownLatch or CompletableFuture would avoid this busy wait
            while (!part.exists() || (i != page && part.length() < per_page)) {
                Thread.sleep(1000);
            }
            os.write(FileUtils.readFileToByteArray(part)); // commons-io
            part.delete();
        }
    }
}
</code>Swagger configuration adds the Range header to the API documentation, allowing interactive testing of the chunked download.
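The article's Swagger setup is not reproduced here; a hedged sketch of how a springfox (Swagger 2) Docket could declare a global Range header looks like the following. The package name and springfox version are assumptions, not taken from the article:

```java
// Hypothetical springfox 2.x configuration documenting a global Range header
@Configuration
@EnableSwagger2
public class SwaggerConfig {
    @Bean
    public Docket api() {
        Parameter rangeHeader = new ParameterBuilder()
                .name("Range")                              // HTTP Range request header
                .description("byte range, e.g. bytes=0-1023")
                .modelRef(new ModelRef("string"))
                .parameterType("header")
                .required(false)
                .build();
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.example.download")) // assumed package
                .paths(PathSelectors.any())
                .build()
                .globalOperationParameters(Collections.singletonList(rangeHeader));
    }
}
```

With this in place, the Swagger UI shows a Range input on every operation, so a partial request like `bytes=0-1023` can be tried interactively against the download endpoint.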
Finally, the article demonstrates successful chunked and breakpoint-resume downloads, with screenshots of the merged file and of error handling during merging.
Lobster Programming
Sharing insights on technical analysis and exchange, making life better through technology.