Optimizing Docker Build for Go Applications with Multi‑Stage Builds and Cache‑From
This article demonstrates how to shrink Go application Docker images and accelerate CI builds by applying multi‑stage builds, consolidating RUN commands, cleaning up layers, and leveraging Docker's cache‑from feature to reuse previously built layers in GitLab CI pipelines.
Modern CI services such as AWS CodeBuild, Google Cloud Build, and GitLab CI create a fresh build environment for each run, which improves isolation but can increase build time. Using a simple Go application as an example, the article walks through a series of Dockerfile optimizations.
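The article does not reproduce the sample application's source. A minimal stand-in that matches the `go build -o main .` and `EXPOSE 8080` lines used throughout (the handler and response text are assumptions, not taken from the article) could look like:

```go
package main

import (
	"fmt"
	"net/http"
)

// greeting returns the response body served by the example app.
func greeting() string {
	return "hello from go-docker\n"
}

func main() {
	// A single handler is enough to exercise the Docker build.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, greeting())
	})
	// Listen on the port the Dockerfiles below EXPOSE.
	http.ListenAndServe(":8080", nil)
}
```

Any small Go program with a `main` package would work equally well; the Dockerfile optimizations below do not depend on what the binary does.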
The initial Dockerfile builds the binary inside a golang:latest image and produces a final image that still contains the entire Go toolchain and downloaded dependencies, making it unnecessarily large.
FROM golang:latest
WORKDIR /app
COPY . .
RUN export GOPROXY=https://goproxy.cn && go mod download
RUN go build -o main .
EXPOSE 8080
CMD ["./main"]
To reduce the image size, a multi-stage build is introduced. The first stage (builder) compiles the binary, and the second stage uses a minimal alpine:latest base image, copying only the compiled binary.
# builder stage
FROM golang:latest AS builder
WORKDIR /app
COPY . .
RUN export GOPROXY=https://goproxy.cn && go mod download
RUN go build -o main .
# production stage
FROM alpine:latest
WORKDIR /app
RUN apk update && apk --no-cache add ca-certificates
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]
Further size reduction is attempted by deleting the temporary /var/cache/apk directory in a separate RUN, but because each RUN creates a new read-only layer, the deletion only hides the files; it does not shrink the earlier layer that contains them. Merging the installation and cleanup into a single RUN resolves this.
FROM alpine:latest
WORKDIR /app
RUN apk update && apk --no-cache add ca-certificates && rm -rf /var/cache/apk/*
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]
In CI environments, Docker cannot reuse layers because each fresh build environment lacks the previously built images. The article introduces the --cache-from option, which tells Docker to use an existing image as a cache source. To make the cache effective, the Dockerfile is reordered so that the rarely changing dependency files (go.mod and go.sum) are copied before the rest of the source, allowing the dependency download layer to be reused across builds.
FROM golang:latest AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN export GOPROXY=https://goproxy.cn && go mod download
COPY . .
RUN go build -o main .
FROM alpine:latest
WORKDIR /app
RUN apk update && apk --no-cache add ca-certificates && rm -rf /var/cache/apk/*
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]
The build commands are adjusted to pull the existing images (ignoring failures on the first run), build the builder image with --target builder and --cache-from, build the production image using both the builder and production images as cache sources, and finally push both images back to the registry.
IMAGE=gitlab.com/go-docker:develop
BUILDER_IMAGE=gitlab.com/go-docker:builder
docker pull $IMAGE || true
docker pull $BUILDER_IMAGE || true
# Build builder image
docker build \
  --target builder \
  --cache-from $BUILDER_IMAGE \
  -t $BUILDER_IMAGE .
# Build production image
docker build \
  --cache-from $BUILDER_IMAGE \
  --cache-from $IMAGE \
  -t $IMAGE .
# Push images
docker push $BUILDER_IMAGE
docker push $IMAGE
By separating the builder and production images, using an Alpine base, merging RUN steps, and employing --cache-from, build times can drop dramatically, often by a factor of two to three, while keeping the final image minimal.
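In GitLab CI, the push-and-pull script above maps onto a single job. The following sketch uses GitLab's predefined registry variables and the docker:dind service; the job name, stage layout, and image tags are assumptions, not taken from the article:

```yaml
build:
  image: docker:latest
  services:
    - docker:dind
  variables:
    IMAGE: $CI_REGISTRY_IMAGE:develop
    BUILDER_IMAGE: $CI_REGISTRY_IMAGE:builder
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Pull previous images so --cache-from has something to work with
    - docker pull $IMAGE || true
    - docker pull $BUILDER_IMAGE || true
    # Build and cache the builder stage, then the production image
    - docker build --target builder --cache-from $BUILDER_IMAGE -t $BUILDER_IMAGE .
    - docker build --cache-from $BUILDER_IMAGE --cache-from $IMAGE -t $IMAGE .
    # Push both images so the next pipeline run can reuse their layers
    - docker push $BUILDER_IMAGE
    - docker push $IMAGE
```

Pushing the builder image alongside the production image is what allows the expensive go mod download layer to be cached across pipelines.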
NetEase Game Operations Platform
The NetEase Game Automated Operations Platform delivers stable services for thousands of NetEase titles, focusing on efficient ops workflows, intelligent monitoring, and virtualization.