
How kubelet Uses CRI and dockershim to Create Pods – A Deep Dive

This article provides a detailed analysis of the kubelet CRI pod creation workflow, using the dockershim implementation as an example, and includes step‑by‑step code walkthroughs of SyncPod, sandbox creation, and the interactions between kubelet, CRI shim, Docker, and CNI.

Qingyun Technology Community

kubelet CRI Pod Creation Call Flow

The article analyzes the kubelet dockershim pod creation process as an example of the CRI call flow.

kubelet calls its built‑in CRI shim, dockershim, which in turn calls Docker to create and start containers and uses CNI to set up the pod network.

dockershim is built into kubelet itself; remote CRI shims follow the same call pattern but drive different container engines.
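The key point is that the kubelet call site depends only on the CRI interface, not on any particular engine. A minimal, hypothetical sketch of that seam (the names `RuntimeSandboxer`, `dockerShim`, and `remoteShim` are illustrative stand-ins for the much larger CRI `RuntimeService`):

```go
package main

import "fmt"

// RuntimeSandboxer is a hypothetical, trimmed-down stand-in for the CRI
// RuntimeService: both the built-in dockershim and any remote shim expose
// the same RunPodSandbox entry point to kubelet.
type RuntimeSandboxer interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
}

// dockerShim sketches the built-in shim backed by Docker.
type dockerShim struct{}

func (dockerShim) RunPodSandbox(podName string) (string, error) {
	return "docker-sandbox-" + podName, nil
}

// remoteShim sketches an external shim (e.g. containerd or CRI-O).
type remoteShim struct{ engine string }

func (r remoteShim) RunPodSandbox(podName string) (string, error) {
	return r.engine + "-sandbox-" + podName, nil
}

// createSandbox is what kubelet's call site looks like conceptually:
// it only depends on the interface, never on the concrete engine.
func createSandbox(rt RuntimeSandboxer, podName string) (string, error) {
	return rt.RunPodSandbox(podName)
}

func main() {
	for _, rt := range []RuntimeSandboxer{dockerShim{}, remoteShim{engine: "containerd"}} {
		id, err := createSandbox(rt, "nginx")
		fmt.Println(id, err)
	}
}
```

Swapping engines changes only which implementation is plugged in; the kubelet-side code is untouched, which is exactly why the rest of this walkthrough generalizes beyond dockershim.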

Detailed Source Analysis

The SyncPod method of kubeGenericRuntimeManager triggers the CRI pod creation logic. Its steps are:

Create and start the pod sandbox container, building the pod network.

Create and start ephemeral containers.

Create and start init containers.

Create and start normal (business) containers.

Key calls include m.createPodSandbox and m.startContainer.
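The ordering above can be sketched as a small driver that runs injected step functions in the fixed SyncPod sequence. This is a simplification for illustration only (the real SyncPod also computes changes and kills stale containers first); `syncPodSteps` is a hypothetical name:

```go
package main

import "fmt"

// syncPodSteps is a hypothetical sketch of the ordering SyncPod enforces:
// sandbox first (which brings up the pod network), then ephemeral, init,
// and finally normal containers. Steps are injected so the flow stays visible.
func syncPodSteps(steps map[string]func() error) ([]string, error) {
	order := []string{
		"create sandbox",
		"start ephemeral containers",
		"start init containers",
		"start normal containers",
	}
	var done []string
	for _, name := range order {
		if step, ok := steps[name]; ok {
			if err := step(); err != nil {
				// As in SyncPod, a failed step aborts the rest of the sync;
				// the pod is retried on a later sync iteration.
				return done, fmt.Errorf("%s: %w", name, err)
			}
		}
		done = append(done, name)
	}
	return done, nil
}

func main() {
	done, err := syncPodSteps(map[string]func() error{})
	fmt.Println(done, err)
}
```

The fixed order matters: the sandbox must exist before any container can join its namespaces, and init containers must complete before normal containers start.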

// pkg/kubelet/kuberuntime/kuberuntime_manager.go
// SyncPod syncs the running pod into the desired pod by executing following steps:
//   1. Compute sandbox and container changes.
//   2. Kill pod sandbox if necessary.
//   3. Kill any containers that should not be running.
//   4. Create sandbox if necessary.
//   5. Create ephemeral containers.
//   6. Create init containers.
//   7. Create normal containers.
func (m *kubeGenericRuntimeManager) SyncPod(pod *v1.Pod, podStatus *kubecontainer.PodStatus, pullSecrets []v1.Secret, backOff *flowcontrol.Backoff) (result kubecontainer.PodSyncResult) {
    // ...
    // Step 4: Create a sandbox
    podSandboxID := podContainerChanges.SandboxID
    if podContainerChanges.CreateSandbox {
        var msg string
        var err error
        klog.V(4).Infof("Creating sandbox for pod %q", format.Pod(pod))
        createSandboxResult := kubecontainer.NewSyncResult(kubecontainer.CreatePodSandbox, format.Pod(pod))
        result.AddSyncResult(createSandboxResult)
        podSandboxID, msg, err = m.createPodSandbox(pod, podContainerChanges.Attempt)
        // ...
    }
}

The m.createPodSandbox method calls m.runtimeService.RunPodSandbox, which invokes the remote CRI shim's RunPodSandbox method.

// pkg/kubelet/kuberuntime/kuberuntime_sandbox.go
func (m *kubeGenericRuntimeManager) createPodSandbox(pod *v1.Pod, attempt uint32) (string, string, error) {
    podSandboxConfig, err := m.generatePodSandboxConfig(pod, attempt)
    if err != nil {
        message := fmt.Sprintf("GeneratePodSandboxConfig for pod %q failed: %v", format.Pod(pod), err)
        klog.Error(message)
        return "", message, err
    }
    // Create pod logs directory
    err = m.osInterface.MkdirAll(podSandboxConfig.LogDirectory, 0755)
    if err != nil {
        message := fmt.Sprintf("Create pod log directory for pod %q failed: %v", format.Pod(pod), err)
        klog.Errorf(message)
        return "", message, err
    }
    // Determine runtime handler if RuntimeClass is used
    runtimeHandler := ""
    if utilfeature.DefaultFeatureGate.Enabled(features.RuntimeClass) && m.runtimeClassManager != nil {
        runtimeHandler, err = m.runtimeClassManager.LookupRuntimeHandler(pod.Spec.RuntimeClassName)
        if err != nil {
            message := fmt.Sprintf("CreatePodSandbox for pod %q failed: %v", format.Pod(pod), err)
            return "", message, err
        }
        if runtimeHandler != "" {
            klog.V(2).Infof("Running pod %s with RuntimeHandler %q", format.Pod(pod), runtimeHandler)
        }
    }
    podSandBoxID, err := m.runtimeService.RunPodSandbox(podSandboxConfig, runtimeHandler)
    if err != nil {
        message := fmt.Sprintf("CreatePodSandbox for pod %q failed: %v", format.Pod(pod), err)
        klog.Error(message)
        return "", message, err
    }
    return podSandBoxID, "", nil
}

The remote runtime service implementation forwards the request to the CRI shim client:

// pkg/kubelet/remote/remote_runtime.go
func (r *RemoteRuntimeService) RunPodSandbox(config *runtimeapi.PodSandboxConfig, runtimeHandler string) (string, error) {
    ctx, cancel := getContextWithTimeout(r.timeout * 2)
    defer cancel()
    resp, err := r.runtimeClient.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: config, RuntimeHandler: runtimeHandler})
    if err != nil {
        klog.Errorf("RunPodSandbox from runtime service failed: %v", err)
        return "", err
    }
    if resp.PodSandboxId == "" {
        errorMessage := fmt.Sprintf("PodSandboxId is not set for sandbox %q", config.GetMetadata())
        klog.Errorf("RunPodSandbox failed: %s", errorMessage)
        return "", errors.New(errorMessage)
    }
    return resp.PodSandboxId, nil
}

Within the dockershim implementation, RunPodSandbox performs five steps:

Pull the sandbox image.

Create the sandbox container.

Create a sandbox checkpoint.

Start the sandbox container.

Set up networking via CNI.
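The five steps above, including the cleanup path when CNI setup fails, can be sketched as follows. The `sandboxEngine` struct is a hypothetical seam over the calls dockershim actually makes to the Docker client, checkpoint manager, and CNI:

```go
package main

import (
	"errors"
	"fmt"
)

// sandboxEngine is a hypothetical seam over dockershim's dependencies.
type sandboxEngine struct {
	pullImage, createContainer, createCheckpoint, startContainer, setUpNetwork func() error
	tearDownNetwork, stopContainer                                             func()
}

// runPodSandbox sketches the five steps, including the cleanup path:
// if CNI setup fails, the network is torn down and the sandbox stopped,
// mirroring the error branch in dockershim's RunPodSandbox.
func runPodSandbox(e sandboxEngine) error {
	for _, step := range []func() error{
		e.pullImage, e.createContainer, e.createCheckpoint, e.startContainer,
	} {
		if err := step(); err != nil {
			return err
		}
	}
	if err := e.setUpNetwork(); err != nil {
		e.tearDownNetwork()
		e.stopContainer()
		return fmt.Errorf("network setup: %w", err)
	}
	return nil
}

func main() {
	ok := func() error { return nil }
	cleaned := false
	err := runPodSandbox(sandboxEngine{
		pullImage: ok, createContainer: ok, createCheckpoint: ok, startContainer: ok,
		setUpNetwork:    func() error { return errors.New("CNI plugin failed") },
		tearDownNetwork: func() { cleaned = true },
		stopContainer:   func() {},
	})
	fmt.Println(err != nil, cleaned)
}
```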

// pkg/kubelet/dockershim/docker_sandbox.go
func (ds *dockerService) RunPodSandbox(ctx context.Context, r *runtimeapi.RunPodSandboxRequest) (*runtimeapi.RunPodSandboxResponse, error) {
    config := r.GetConfig()
    // Step 1: Pull the image for the sandbox.
    image := defaultSandboxImage
    if podSandboxImage := ds.podSandboxImage; len(podSandboxImage) != 0 {
        image = podSandboxImage
    }
    if err := ensureSandboxImageExists(ds.client, image); err != nil {
        return nil, err
    }
    // Step 2: Create the sandbox container.
    if r.GetRuntimeHandler() != "" && r.GetRuntimeHandler() != runtimeName {
        return nil, fmt.Errorf("RuntimeHandler %q not supported", r.GetRuntimeHandler())
    }
    createConfig, err := ds.makeSandboxDockerConfig(config, image)
    if err != nil {
        return nil, fmt.Errorf("failed to make sandbox docker config for pod %q: %v", config.Metadata.Name, err)
    }
    createResp, err := ds.client.CreateContainer(*createConfig)
    if err != nil {
        createResp, err = recoverFromCreationConflictIfNeeded(ds.client, *createConfig, err)
    }
    if err != nil || createResp == nil {
        return nil, fmt.Errorf("failed to create a sandbox for pod %q: %v", config.Metadata.Name, err)
    }
    resp := &runtimeapi.RunPodSandboxResponse{PodSandboxId: createResp.ID}
    ds.setNetworkReady(createResp.ID, false)
    defer func(e *error) {
        if *e == nil {
            ds.setNetworkReady(createResp.ID, true)
        }
    }(&err)
    // Step 3: Create Sandbox Checkpoint.
    if err = ds.checkpointManager.CreateCheckpoint(createResp.ID, constructPodSandboxCheckpoint(config)); err != nil {
        return nil, err
    }
    // Step 4: Start the sandbox container.
    if err = ds.client.StartContainer(createResp.ID); err != nil {
        return nil, fmt.Errorf("failed to start sandbox container for pod %q: %v", config.Metadata.Name, err)
    }
    // Step 5: Set up networking via CNI.
    if config.GetLinux().GetSecurityContext().GetNamespaceOptions().GetNetwork() == runtimeapi.NamespaceMode_NODE {
        return resp, nil
    }
    cID := kubecontainer.BuildContainerID(runtimeName, createResp.ID)
    networkOptions := map[string]string{}
    if dnsConfig := config.GetDnsConfig(); dnsConfig != nil {
        dnsOption, err := json.Marshal(dnsConfig)
        if err != nil {
            return nil, fmt.Errorf("failed to marshal dns config for pod %q: %v", config.Metadata.Name, err)
        }
        networkOptions["dns"] = string(dnsOption)
    }
    if err = ds.network.SetUpPod(config.GetMetadata().Namespace, config.GetMetadata().Name, cID, config.Annotations, networkOptions); err != nil {
        // Cleanup on failure
        ds.network.TearDownPod(config.GetMetadata().Namespace, config.GetMetadata().Name, cID)
        ds.client.StopContainer(createResp.ID, defaultSandboxGracePeriod)
        return resp, err
    }
    return resp, nil
}

The Docker client call chain is:

// pkg/kubelet/dockershim/libdocker/kube_docker_client.go
func (d *kubeDockerClient) CreateContainer(opts dockertypes.ContainerCreateConfig) (*dockercontainer.ContainerCreateCreatedBody, error) {
    ctx, cancel := d.getTimeoutContext()
    defer cancel()
    if opts.HostConfig != nil && opts.HostConfig.ShmSize <= 0 {
        opts.HostConfig.ShmSize = defaultShmSize
    }
    createResp, err := d.client.ContainerCreate(ctx, opts.Config, opts.HostConfig, opts.NetworkingConfig, opts.Name)
    if ctxErr := contextError(ctx); ctxErr != nil {
        return nil, ctxErr
    }
    if err != nil {
        return nil, err
    }
    return &createResp, nil
}
// vendor/github.com/docker/docker/client/container_create.go
func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, containerName string) (container.ContainerCreateCreatedBody, error) {
    var response container.ContainerCreateCreatedBody
    query := url.Values{}
    if containerName != "" {
        query.Set("name", containerName)
    }
    body := configWrapper{Config: config, HostConfig: hostConfig, NetworkingConfig: networkingConfig}
    serverResp, err := cli.post(ctx, "/containers/create", query, body, nil)
    defer ensureReaderClosed(serverResp)
    if err != nil {
        return response, err
    }
    err = json.NewDecoder(serverResp.body).Decode(&response)
    return response, err
}

The CRI call flow in kubelet is the same regardless of which shim is in use: dockershim is the built‑in implementation for Docker, while external shims support runtimes such as containerd or CRI‑O.

Summary

The kubelet creates a pod by first creating a sandbox container and network, then launching ephemeral, init, and normal containers. The process is orchestrated through the CRI shim (dockershim in this example), which delegates to the underlying container runtime (Docker) and CNI plugins.

(Figure: CRI architecture diagram)

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Kubernetes · CRI · Dockershim · Container Runtime · Pod Lifecycle
Written by

Qingyun Technology Community

Official account of the Qingyun Technology Community, focusing on tech innovation, supporting developers, and sharing knowledge. Born to Learn and Share!
