
Building Container Images in Containerd Environments: Docker, DinD, DaemonSet, Kaniko, and Jib

This article introduces the main image-building solutions for containerd runtimes, covering Docker-outside-Docker, DinD sidecars, DaemonSet deployments, and daemon-less tools such as Kaniko and Jib, with detailed YAML examples and usage instructions for CI/CD pipelines.

DevOps Cloud Academy

With Docker as the container runtime, most people simply run docker build to build images. When a cluster switches to containerd, that command is no longer available on the node; nerdctl combined with BuildKit is one replacement, but several other options exist.

This article walks through the main image-building tools and approaches for a containerd runtime.
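For reference, on a node where containerd and a running buildkitd are available, nerdctl can build and push an image directly; the registry and image names below are placeholders:

```shell
# Build from the Dockerfile in the current directory (requires buildkitd)
nerdctl build -t registry.example.com/demo/app:v1 .

# Push to the registry (assumes a prior `nerdctl login registry.example.com`)
nerdctl push registry.example.com/demo/app:v1
```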

Using Docker as an Image‑Build Service

In a Kubernetes cluster, CI/CD pipelines may still rely on Docker for image packaging. By mounting the host's Docker UNIX socket (/var/run/docker.sock) into a pod via a hostPath volume, the pod can drive the Docker daemon on the host — the "Docker outside of Docker" (DooD) pattern. This is simpler and more resource-efficient than Docker-in-Docker, but it has drawbacks: it cannot work on containerd-only clusters (there is no Docker socket to mount), builds can clobber images already cached on the node, changes to the daemon configuration affect every workload on that host, and it poses security risks in multi-tenant environments because access to the socket is effectively root on the node and allows arbitrary container manipulation.
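A minimal sketch of the DooD pattern, assuming Docker is installed on the node; the pod and volume names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dood-builder
spec:
  containers:
  - name: builder
    image: docker:stable          # provides the docker CLI only
    command: ["sleep", "86400"]
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock  # host daemon's socket
      type: Socket
```

Inside this pod, docker build and docker push talk straight to the host's daemon.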

Using DinD as a Pod Sidecar

A sidecar container running Docker-in-Docker (DinD) can be added to the build pod, sharing an emptyDir volume mounted at /var/run so the main container can reach the daemon's socket. The following pod spec demonstrates this pattern:

apiVersion: v1
kind: Pod
metadata:
  name: clean-ci
spec:
  containers:
  - name: dind
    image: 'docker:stable-dind'
    command:
    - dockerd
    - --host=unix:///var/run/docker.sock
    - --host=tcp://0.0.0.0:8000
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/run
      name: cache-dir
  - name: clean-ci
    image: 'docker:stable'
    command: ["/bin/sh", "-c"]
    args:
    - |
      # Wait until the sidecar's dockerd is ready
      until docker info >/dev/null 2>&1; do sleep 3; done
      docker pull library/busybox:latest
      docker save -o busybox-latest.tar library/busybox:latest
      docker rmi library/busybox:latest
      # Keep the pod alive
      while true; do sleep 86400; done
    volumeMounts:
    - mountPath: /var/run
      name: cache-dir
  volumes:
  - name: cache-dir
    emptyDir: {}

With the sidecar providing dockerd, the main container reaches it through the shared socket at unix:///var/run/docker.sock (or over TCP on port 8000).
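The poll-until-ready loop in the args above can be factored into a small reusable function. This is an illustrative sketch; wait_ready is a hypothetical helper name, and in the pod the real readiness check is docker info, which succeeds once dockerd is listening on the shared socket:

```shell
#!/bin/sh
# Poll a command until it succeeds, giving up after a fixed number of attempts.
wait_ready() {
  attempts=0
  until "$@" >/dev/null 2>&1; do
    attempts=$((attempts + 1))
    [ "$attempts" -ge 5 ] && return 1
    sleep 1
  done
  return 0
}

# In the pod this would be: wait_ready docker info && docker pull ...
wait_ready true && echo ready
```

Bounding the retries (unlike the unbounded loop in the pod spec) lets the build step fail fast if the sidecar never comes up.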

Deploying Docker via DaemonSet

Alternatively, a DaemonSet can run a Docker daemon on every node, exposing the same socket through a hostPath volume. Example DaemonSet YAML:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: docker-ci
spec:
  selector:
    matchLabels:
      app: docker-ci
  template:
    metadata:
      labels:
        app: docker-ci
    spec:
      containers:
      - name: docker-ci
        image: 'docker:stable-dind'
        command:
        - dockerd
        - --host=unix:///var/run/docker.sock
        - --host=tcp://0.0.0.0:8000
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/run
          name: host
      volumes:
      - name: host
        hostPath:
          path: /var/run

Pods that need to build images mount the same hostPath and use the socket just as with the sidecar approach.
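A build pod consuming the DaemonSet's socket might look like the following sketch; the pod name is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-client
spec:
  containers:
  - name: build-client
    image: docker:stable
    command: ["sleep", "86400"]
    volumeMounts:
    - name: host
      mountPath: /var/run   # picks up docker.sock written by the DaemonSet
  volumes:
  - name: host
    hostPath:
      path: /var/run
```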

Kaniko

Kaniko, an open‑source Google project, builds container images from a Dockerfile inside a container or Kubernetes pod without requiring a Docker daemon or privileged mode. It runs each Dockerfile instruction in user space, creates a snapshot after each step, and assembles image layers that are finally pushed to a remote registry.

A minimal Dockerfile example:

FROM alpine:latest
RUN apk add busybox-extras curl
CMD ["echo","Hello Kaniko"]

A Kaniko pod can be created as follows (the args specify the Dockerfile location, build context, and destination image):

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=/workspace/Dockerfile","--context=/workspace/","--destination=cnych/kaniko-test:v0.0.1"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /kaniko/.docker
    - name: dockerfile
      mountPath: /workspace/Dockerfile
      subPath: Dockerfile
  volumes:
  - name: dockerfile
    configMap:
      name: dockerfile
  - name: kaniko-secret
    projected:
      sources:
      - secret:
          name: regcred
          items:
          - key: .dockerconfigjson
            path: config.json

The secret supplies registry credentials as a Docker config.json; its auths entry carries the base64-encoded username:password pair for the target registry.
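For completeness, the ConfigMap and Secret referenced by the pod can be created with kubectl; the registry URL, username, and password values are placeholders:

```shell
# ConfigMap holding the Dockerfile shown above
kubectl create configmap dockerfile --from-file=Dockerfile

# Registry credentials; kubectl stores them under the key .dockerconfigjson
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<username> \
  --docker-password=<password>
```

Once the pod completes, Kaniko will have pushed the image named in --destination to the registry.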

Jib

Jib is a Google-maintained tool for building Java container images without a Dockerfile or Docker daemon. Integrated as a Maven or Gradle plugin, it splits the application into separate layers (dependencies, resources, classes) so rebuilds reuse cached layers, and it can push the result directly to a registry.

Gradle configuration example:

buildscript {
    ...
    dependencies {
        ...
        classpath "gradle.plugin.com.google.cloud.tools:jib-gradle-plugin:1.1.2"
    }
}
apply plugin: 'com.google.cloud.tools.jib'

jib {
    from {
        image = 'harbor.k8s.local/library/base:1.0'
        auth {
            username = '********'
            password = '********'
        }
    }
    to {
        image = 'harbor.k8s.local/library/xxapp:1.0'
        auth {
            username = '********'
            password = '********'
        }
    }
    container {
        jvmFlags = ['-Djava.security.egd=file:/dev/./urandom']
        ports = ['8080']
        useCurrentTimestamp = false
        workingDirectory = "/app"
    }
}

Build and push with gradle jib ; build into the local Docker daemon with gradle jibDockerBuild .
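For Maven projects, a sketch of the equivalent configuration registers jib-maven-plugin in pom.xml; the version and image names below mirror the Gradle example and are illustrative:

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>1.1.2</version>
  <configuration>
    <from>
      <image>harbor.k8s.local/library/base:1.0</image>
    </from>
    <to>
      <image>harbor.k8s.local/library/xxapp:1.0</image>
    </to>
  </configuration>
</plugin>
```

With this in place, mvn compile jib:build pushes straight to the registry, and mvn compile jib:dockerBuild builds into a local Docker daemon.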

Together with BuildKit, Buildah, and other daemon‑less builders, these tools enable container image creation without relying on a Docker daemon, fitting well into Kubernetes‑native CI/CD workflows.

Tags: Docker, CI/CD, Kubernetes, containerd, Jib, Kaniko, Image Building
Written by DevOps Cloud Academy, exploring industry DevOps practices and technical expertise.