Script vs Pre‑built Docker Images: Building Faster, More Reliable GitLab CI Pipelines
This article compares two GitLab CI strategies—installing tools on‑the‑fly in scripts versus using a pre‑built Docker image—detailing their implementations, pros and cons, and concluding that pre‑built images provide superior speed, consistency, and security for long‑term projects.
Introduction
One of the most frequent failures in GitLab CI pipelines is the "command not found" error, which occurs when required tools such as kustomize, kubectl, or helm are missing from the runner environment. Two main strategies exist for providing these tools: installing them dynamically in the CI script (the "script-installation" approach) or building a custom Docker image that already contains them (the "pre-built image" approach).
Script‑Installation Approach
This method keeps the runner’s base image minimal and installs the required binaries on demand in before_script or script sections.
Implementation example:
```yaml
build:
  stage: build
  before_script:
    - |
      if ! command -v kustomize > /dev/null; then
        echo "kustomize could not be found, installing..."
        curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
        # Docker-executor jobs typically run as root, so no sudo is needed
        mv kustomize /usr/local/bin/
      fi
  script:
    - kustomize build .
```

Pros:
High flexibility – each job can specify a different tool version, making it easy to test new releases.
Low runner maintenance – the runner remains generic and does not need project‑specific customization.
Cons:
Performance penalty – every job (including retries) must download and install the tools, adding noticeable latency.
Reliability risk – the pipeline depends on external networks (GitHub, package mirrors); outages cause pipeline failures.
Inconsistency – unless versions are explicitly pinned, different runs may use different tool versions, leading to flaky builds.
Script bloat – environment‑setup logic mixes with business logic, reducing readability.
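The inconsistency risk above can be mitigated by pinning an exact release and downloading the versioned binary directly, instead of piping the upstream install script from master. A minimal sketch of building such a pinned download URL (the version and architecture shown are example values):

```shell
# Build a pinned, architecture-specific download URL so every job run
# fetches exactly the same kustomize binary (v4.5.7 is an example version).
KUSTOMIZE_VERSION="v4.5.7"
ARCH="linux_amd64"
URL="https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_${ARCH}.tar.gz"
echo "$URL"
# In the CI job you would then run:
#   curl -sL "$URL" | tar -xz -C /usr/local/bin kustomize
```

Pinning trades away the "always latest" flexibility, but it makes retries and re-runs reproducible, which is usually the better default in shared pipelines.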
Pre‑built Image Approach
This method treats the CI environment itself as code by creating a custom Docker image that bundles all required tools.
Implementation steps:
Create a Dockerfile.ci at the project root:
```dockerfile
# Use an official base image that already ships kubectl and helm
FROM alpine/k8s:1.23.6

# Pin the Kustomize version (passed to the install script below)
ARG KUSTOMIZE_VERSION=4.5.7

# Install curl for downloading
RUN apk add --no-cache curl

# Install Kustomize at the pinned version into /usr/local/bin
RUN curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | \
    bash -s -- "${KUSTOMIZE_VERSION}" /usr/local/bin

# Additional tools (e.g., the AWS CLI) can be added here
# RUN ...
```

Build the image and push it to a container registry (e.g., GitLab Container Registry), either manually or via a separate CI pipeline.
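The "separate CI pipeline" can be as small as one job that rebuilds the image only when Dockerfile.ci changes. A hedged sketch using Docker-in-Docker and GitLab's predefined CI_REGISTRY_* variables (the job name and image tag are illustrative):

```yaml
# Hypothetical job: rebuild and push the CI environment image
# whenever Dockerfile.ci changes on the default branch.
build-ci-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -f Dockerfile.ci -t "$CI_REGISTRY_IMAGE/ci-env:latest" .
    - docker push "$CI_REGISTRY_IMAGE/ci-env:latest"
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - Dockerfile.ci
```

The `rules:changes` clause keeps ordinary application pipelines fast: the image is only rebuilt when its definition actually changes.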
Reference the custom image in .gitlab-ci.yml:
```yaml
# Set the custom image as the default for all jobs
default:
  image: registry.gitlab.com/your-group/your-project/ci-env:latest

build:
  stage: build
  script:
    - kustomize build .
```

Pros:
Fast startup – the runner only needs to pull the pre‑built image, eliminating on‑the‑fly installation time.
High reliability and consistency – the environment is immutable and versioned; every run uses exactly the same tools.
Clean CI scripts – scripts contain only business logic.
Improved security – you control the source and versions of all installed tools and can scan the image for vulnerabilities.
Cons:
Additional maintenance – the Dockerfile must be kept up‑to‑date and the image rebuilt whenever a tool version changes.
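One way to keep this maintenance cost predictable is to reference the environment image by an explicit tag rather than :latest, so a rebuilt image cannot change pipeline behaviour until someone deliberately bumps the tag. For example (the tag shown is illustrative):

```yaml
# Pin the CI environment to a versioned tag instead of :latest
default:
  image: registry.gitlab.com/your-group/your-project/ci-env:v1.2.0
```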
Comparison and Conclusion
The two approaches differ mainly in execution speed, reliability, and maintenance overhead: script installation trades job speed and consistency for per-job flexibility, while a pre-built image front-loads that cost into image maintenance.
Conclusion: For long‑term, production‑grade projects, codifying the CI/CD environment in a custom Docker image (pre‑built image approach) provides superior speed, reliability, and consistency, outweighing the modest effort required to maintain the Dockerfile. The script‑installation method may be acceptable for quick prototypes or very small personal projects, but it becomes a bottleneck in collaborative pipelines.
Ops Development & AI Practice
DevSecOps engineer sharing experiences and insights on AI, Web3, and Claude code development. Aims to help solve technical challenges, improve development efficiency, and grow through community interaction. Feel free to comment and discuss.