
How to Shrink Docker Images from 600 MB to 60 MB: Proven Optimization Techniques

This guide shows how to dramatically reduce Docker image sizes, from hundreds of megabytes to a few dozen, by selecting lightweight base images, using multi-stage builds, optimizing layers, leveraging .dockerignore, applying BuildKit, and automating checks, with real-world metrics demonstrating up to 90% size cuts and faster builds.

Raymond Ops

Introduction: A painful production incident

Six months ago, an urgent release failed: a simple Node.js application image had ballooned to 1.2 GB and took 25 minutes to download over limited bandwidth during peak traffic, putting revenue at risk.

Image optimization is a survival skill, not a luxury.

Core Strategy 1 – Choose the right base image

1. Alpine Linux: Small and efficient

# Traditional approach – Ubuntu base image
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nodejs npm
# Final image size: ~400 MB

# Optimized – Alpine base image
FROM node:16-alpine
# Final image size: ~110 MB

Practical tips:

Alpine-based images are often 70–80% smaller than their Ubuntu-based equivalents (the ~400 MB → ~110 MB comparison above is a 72% cut).

Use the apk package manager for faster installations.

Be aware that some dependencies may require additional build tools.
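When a dependency does need build tools (anything compiled via node-gyp, for example), Alpine's virtual packages let you install and remove the toolchain within a single layer. A sketch, assuming a package.json that pulls in native modules:

```dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
# Install the toolchain, build native modules, then remove the toolchain in one layer
RUN apk add --no-cache --virtual .build-deps python3 make g++ && \
    npm ci --only=production && \
    apk del .build-deps
```

Because the toolchain is added and deleted inside the same RUN instruction, it never lands in a committed layer and adds nothing to the final image.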

2. Distroless: Ultra‑minimal

FROM gcr.io/distroless/nodejs:16
COPY app.js /
EXPOSE 3000
CMD ["app.js"]

Advantages:

No shell or package manager, which removes most of an attacker's tooling.

30‑50% smaller than Alpine.

Minimized attack surface.
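Because distroless images ship no shell or package manager, dependencies cannot be installed inside them; everything must be staged in a builder image and copied in. A minimal sketch, assuming an app whose dependencies install with npm:

```dockerfile
# Build stage: a full image that has npm
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

# Runtime stage: distroless, where node itself is the entrypoint
FROM gcr.io/distroless/nodejs:16
COPY --from=builder /app /app
WORKDIR /app
EXPOSE 3000
# The distroless nodejs entrypoint runs node, so CMD is just the script
CMD ["app.js"]
```

Debugging is harder without a shell; the :debug tag variants, which include BusyBox, exist for exactly that purpose.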

Core Strategy 2 – Power of multi‑stage builds

This technique separates build‑time dependencies from the final runtime image.

# Stage 1: Build environment
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Stage 2: Runtime environment
FROM node:16-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]

Result comparison:

# Single‑stage build
myapp   old   847 MB

# Multi‑stage build
myapp   new   156 MB

Size reduced by 81%.
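The reduction percentage can be computed directly from the two sizes. A quick sketch with the numbers above hard-coded; in real use, pull exact byte counts from docker image inspect --format '{{.Size}}':

```shell
# Percentage reduction between the single-stage and multi-stage images
old=847000000   # bytes, ~847 MB single-stage
new=156000000   # bytes, ~156 MB multi-stage
awk "BEGIN{printf \"reduced by %.1f%%\n\", (1 - $new / $old) * 100}"
# → reduced by 81.6%
```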

Core Strategy 3 – The magic of .dockerignore

Excluding unnecessary files from the build context dramatically cuts transfer size and time.

# Version control
.git
.gitignore

# Documentation
README.md
docs/
*.md

# Development tools
.vscode/
.idea/

# Tests
test/
*.test.js
coverage/

# Dependency cache
node_modules/
npm-debug.log

# System files
.DS_Store
Thumbs.db

# Build artifacts (if built inside container)
dist/
build/

Performance gains:

Build context reduced by 70%.

Transfer time dropped from 45 s to 12 s.

Build speed increased by 40%.
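To decide what belongs in .dockerignore, it helps to see what is actually heavy in the build context. A rough sketch, run from the directory containing the Dockerfile:

```shell
# Largest top-level entries in the build context, biggest first
du -sh ./* ./.[!.]* 2>/dev/null | sort -rh | head -10
```

node_modules, .git, and coverage usually top this list, which is exactly what the .dockerignore above excludes.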

Core Strategy 4 – Layer optimization

1. Merge RUN commands

# ❌ Bad practice – multiple layers
FROM alpine:3.14
RUN apk update
RUN apk add --no-cache nodejs
RUN apk add --no-cache npm
RUN rm -rf /var/cache/apk/*

# ✅ Good practice – single layer
FROM alpine:3.14
RUN apk add --no-cache nodejs npm

With --no-cache, apk fetches a fresh package index and skips the local cache entirely, so the separate apk update and cache-cleanup steps become unnecessary.

2. Cache‑friendly ordering

# Before – dependencies and code mixed
COPY . /app
RUN npm install

# After – dependencies first to leverage Docker cache
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

Measured impact:

Code changes: rebuild time fell from 8 minutes to 30 seconds.

Cache hit rate improved by 85%.

Core Strategy 5 – Runtime optimizations

1. Run as non‑root user

# Create non‑privileged user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001
USER nextjs

2. Health‑check tuning

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:3000/health || exit 1

Note that Alpine-based images do not ship curl; install it with apk add --no-cache curl, or use BusyBox wget as the full example later does.

Real‑world case study: Full optimization workflow

Original Dockerfile (problematic)

FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nodejs npm python3 make g++
COPY . /app
WORKDIR /app
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
# Image size: 1.2 GB
# Build time: 8 minutes

Optimized Dockerfile (efficient)

# Multi‑stage build
FROM node:16-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production --silent

FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --silent
COPY . .
RUN npm run build

FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001
COPY --from=deps --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
USER nextjs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s \
    CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]

Optimization results:

Image size reduced from 1.2 GB to 145 MB (88% ↓).

Build time cut from 8 minutes to 2 minutes (75% ↓).

Startup time dropped from 25 seconds to 8 seconds (68% ↓).

Security vulnerabilities reduced from 47 to 3 (94% ↓).

Advanced tip – Enable BuildKit for faster builds

# Enable BuildKit (shell)
export DOCKER_BUILDKIT=1

# Dockerfile – cache mount (the syntax directive below must be the first line of the Dockerfile)
# syntax=docker/dockerfile:1
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --only=production

Monitoring and metrics – Data‑driven optimization

Analyze images with dive

# Install dive
wget https://github.com/wagoodman/dive/releases/download/v0.10.0/dive_0.10.0_linux_amd64.deb
sudo dpkg -i dive_0.10.0_linux_amd64.deb

# Inspect image
dive myapp:latest

Key metric commands

# Image size trend
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}\t{{.CreatedAt}}"

# Record build time
time docker build -t myapp:latest .
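docker images prints human-readable sizes ("156MB", "1.2GB"), which are awkward to track over time. A helper sketch that converts them to bytes (docker reports SI units, so MB here means 10^6 bytes); the function name size_to_bytes is just an illustration:

```shell
# Convert a docker-style human-readable size (e.g. "156MB", "1.2GB") to bytes
size_to_bytes() {
  local size num unit
  size=$(echo "$1" | tr -d ' ')                  # tolerate "847 MB" style spacing
  num=$(echo "$size" | sed -E 's/([0-9.]+).*/\1/')
  unit=$(echo "$size" | sed -E 's/[0-9.]+//')
  case "$unit" in
    GB) awk "BEGIN{printf \"%d\", $num * 1000000000}" ;;
    MB) awk "BEGIN{printf \"%d\", $num * 1000000}" ;;
    kB) awk "BEGIN{printf \"%d\", $num * 1000}" ;;
    B|"") awk "BEGIN{printf \"%d\", $num}" ;;
  esac
}

size_to_bytes "1.2GB"   # → 1200000000
```

Feeding these byte counts into a log or metrics system makes size regressions visible long before an image hits production.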

Common pitfalls and how to avoid them

Pitfall 1 – Over‑optimizing breaks compatibility

# ❌ Problematic – scratch base
FROM scratch
COPY app /

# ✅ Safer – Alpine with certificates
FROM alpine:3.14
RUN apk add --no-cache ca-certificates
COPY app /

Pitfall 2 – Ignoring security scans

# Regular scan (docker scan was removed in recent Docker versions; use Docker Scout instead)
docker scout cves myapp:latest

# Use Trivy for deeper analysis
trivy image myapp:latest

Pitfall 3 – Cache invalidation

# Keep low‑frequency operations early
COPY package*.json ./
RUN npm ci
# Code changes later – does not bust cache
COPY . .

Automated optimization in CI/CD

# .github/workflows/docker-optimize.yml
name: Docker Optimization Build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build optimized image
      run: |
        docker build \
          --build-arg BUILDKIT_INLINE_CACHE=1 \
          --cache-from myapp:latest \
          -t myapp:latest .
    - name: Image size check
      run: |
        # Size in bytes from inspect (human-readable sizes are awkward to compare)
        SIZE=$(docker image inspect myapp:latest --format '{{.Size}}')
        echo "Image size: ${SIZE} bytes"
        # Fail if size exceeds 200 MB
        [ "$SIZE" -le $((200 * 1024 * 1024)) ] || exit 1

Performance testing – Verify the gains

#!/bin/bash
# Image size comparison
docker images | grep myapp

# Startup time test (echo overrides CMD, so this measures container spin-up, not app boot)
time docker run --rm myapp:latest echo "Container started"

# Memory usage snapshot
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"

Conclusion – Optimization is an art

Docker image optimization goes beyond mere technical tricks; it reflects deep understanding of system architecture. By applying the strategies above you can achieve:

70‑90% reduction in image size.

50‑80% faster builds.

60‑85% shorter deployment times.

80%+ decrease in security risk.

Long‑term benefits include lower storage and transfer costs, higher developer productivity, stronger system security, and an improved user experience.

Tags: Docker · image-optimization · Alpine · multi-stage-build · BuildKit · Distroless
Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.