Front‑end Container Deployment: Lessons Learned, Optimizations, and Multi‑stage Build
This article details a front‑end engineer's journey of containerizing a static‑site project with Docker and Kubernetes, tackling uncached node_modules, oversized images, and a cumbersome deployment workflow. It presents three optimizations, caching dependencies, shrinking base images, and adopting a multi‑stage build, that cut the final image to roughly 25 MB and halved CI build time.
The author, a front‑end developer and W3C performance group member, describes the initial containerized deployment of a front‑end project using two Docker images (Node for building and Nginx for serving) running in a single Kubernetes pod with a shared volume for static assets.
Problems encountered included:
Repeated npm install because node_modules were not cached.
Large image sizes (each >2 GB) due to heavyweight base images.
Multiple images and a cumbersome release process that required manual steps and kept the Node container alive unnecessarily.
Optimization 1 – Cache node_modules
Node modules are baked into a custom base image that is rebuilt only when dependencies change. npm lifecycle hooks (the version and postversion scripts run by npm version) automatically rebuild the base image, push it to the registry, and trigger CI whenever a new version is cut.
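A dependency base image of this kind might look like the following sketch. The image contents, Node tag, and file names are assumptions for illustration; the article does not show the actual Dockerfile:

```dockerfile
# Hypothetical Dockerfile for the node_modules base image.
# Rebuilt (by the versioning hook) only when dependencies change.
FROM node:16-alpine

WORKDIR /app

# Copy only the dependency manifests, so the image contains
# node_modules but no application source.
COPY package.json package-lock.json ./
RUN npm ci

# Downstream build images start FROM this image and inherit
# the pre-installed node_modules, skipping npm install in CI.
```

The package.json scripts that wire this rebuild into the release flow are shown below.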
```json
{
  "scripts": {
    "version": "sh auto-build-node-base-image.sh",
    "postversion": "git push && git push --tags"
  }
}
```

Optimization 2 – Reduce image size
The heavyweight CentOS base images are replaced with Alpine Linux, and Yarn is installed via `RUN apk add --no-cache yarn`. This shrinks the Node image from 2.53 GB to 668 MB; removing node_modules after the build brings it down to ~25 MB.
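A minimal sketch of such an Alpine-based build image. The Alpine tag and the nodejs package name are assumptions; only the yarn install command comes from the article:

```dockerfile
# Alpine replaces the heavyweight CentOS base.
FROM alpine:3.18

# --no-cache skips storing the apk package index locally,
# so the layer stays small with no separate cleanup step.
RUN apk add --no-cache nodejs yarn
```

The `--no-cache` flag is what keeps the layer lean: apk fetches the index on the fly instead of persisting it into the image.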
Optimization 3 – Multi‑stage build
A Docker multi‑stage build is used: the first stage compiles the static assets, and the second stage copies only the built dist folder and Nginx configuration into a minimal Alpine image. The final pod runs a single Nginx container, eliminating the need for a persistent Node container.
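A sketch of such a two-stage Dockerfile. Image tags, the dist path, and the nginx.conf filename are illustrative assumptions, not taken from the article:

```dockerfile
# Stage 1: build the static assets with the Node toolchain.
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

# Stage 2: copy only the built output into a minimal Nginx image.
# node_modules and the build toolchain never enter the final image.
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/dist /usr/share/nginx/html
```

Because only the second stage ships, the pod runs a single Nginx container with no shared volume and no lingering Node container.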
Results after the three optimizations:
Image size reduced to ~25 MB (down from >2 GB per image).
CI build time dropped from ~400 s to ~170 s on average.
Deployment simplified to a single container per pod with no shared volume.
The article concludes that these changes provide a practical reference for teams looking to streamline front‑end container deployments.
360 Tech Engineering
Official tech channel of 360, building the most professional technology aggregation platform for the brand.