
How to Run Multiple Containers Sequentially in a Single Kubernetes Pod

This article explains why native Kubernetes Jobs run containers concurrently, then shows how to achieve true sequential execution within a single pod using initContainers, and compares three approaches—native Job, Volcano, and Argo—detailing configurations, code samples, and practical trade‑offs.

MaGe Linux Operations

Sometimes you need to run several containers one after another inside the same Kubernetes pod. A regular Kubernetes Job can launch multiple containers in a pod, but they start concurrently.

One workaround is to handle the sequencing at the application level, for example by sharing a local volume and having each container wait on a lock or marker file, but this pushes extra complexity into the application code. Letting Kubernetes handle the ordering directly keeps that work out of your containers.
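For illustration, here is a rough sketch of that application-level approach, showing only the relevant part of the pod spec: two regular containers share an emptyDir, and the second one polls for a marker file before doing its own work. The container names, paths, and the echo placeholders are hypothetical.

      containers:
        - name: job-a
          image: alpine:3.11
          command:
            - 'sh'
            - '-c'
            - >
              echo "first task runs here";
              touch /shared/job-a.done
          volumeMounts:
            - mountPath: /shared
              name: shared
        - name: job-b
          image: alpine:3.11
          command:
            - 'sh'
            - '-c'
            - >
              until [ -f /shared/job-a.done ]; do sleep 1s; done;
              echo "second task runs here"
          volumeMounts:
            - mountPath: /shared
              name: shared
      volumes:
        - name: shared
          emptyDir: {}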

Kubernetes Job with initContainers

Although Kubernetes will not wait for one regular container to finish before starting the next, initContainers do run sequentially: each must complete successfully, in the order declared, before the next one starts, and the main containers only start after all of them have finished. By placing the earlier tasks in initContainers and the final task in the regular containers section, you get true sequential execution.

Example YAML that runs three containers in order:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: sequential-jobs
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: job-1
          image: alpine:3.11
          command:
            - 'sh'
            - '-c'
            - >
              for i in 1 2 3; do
                echo "job-1 `date`";
                sleep 1s;
              done;
              echo code > /srv/input/code
          volumeMounts:
            - mountPath: /srv/input/
              name: input
        - name: job-2
          image: alpine:3.11
          command:
            - 'sh'
            - '-c'
            - >
              for i in 1 2 3; do
                echo "job-2 `date`";
                sleep 1s;
              done;
              cat /srv/input/code &&
              echo artifact > /srv/input/output/artifact
          resources:
            requests:
              cpu: 3
          volumeMounts:
            - mountPath: /srv/input/
              name: input
            - mountPath: /srv/input/output/
              name: output
      containers:
        - name: job-3
          image: alpine:3.11
          command:
            - 'sh'
            - '-c'
            - >
              echo "job-1 and job-2 completed";
              sleep 3s;
              cat /srv/output/artifact
          volumeMounts:
            - mountPath: /srv/output/
              name: output
      volumes:
        - name: input
          emptyDir: {}
        - name: output
          emptyDir: {}
      securityContext:
        runAsUser: 2000
        runAsGroup: 2000
        fsGroup: 2000
backoffLimit: 0 disables retries on failure.

The volumes section defines two emptyDir volumes named input and output for data exchange. securityContext sets a specific UID/GID to avoid running as root.
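To try it, apply the manifest and watch the pod: while the init containers are running, kubectl reports a status such as Init:0/2 and then Init:1/2 before the pod reaches Running. The filename here is only an assumption:

$ kubectl apply -f sequential-jobs.yaml
$ kubectl get pods --watch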

Sample logs after the job finishes:

$ kubectl logs sequential-jobs-r4725 job-1
job-1 Tue Jul 28 07:50:10 UTC 2020
job-1 Tue Jul 28 07:50:11 UTC 2020
job-1 Tue Jul 28 07:50:12 UTC 2020
$ kubectl logs sequential-jobs-r4725 job-2
job-2 Tue Jul 28 07:50:13 UTC 2020
job-2 Tue Jul 28 07:50:14 UTC 2020
job-2 Tue Jul 28 07:50:15 UTC 2020
code
$ kubectl logs sequential-jobs-r4725 job-3
job-1 and job-2 completed
artifact

Volcano

Volcano, the successor to kube-batch, offers richer batch scheduling than a native Job, but it still cannot enforce the order of containers inside a pod. The YAML below mirrors the native approach with an extra tasks layer and offers no functional advantage for this use case.

---
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: volcano-sequential-jobs
spec:
  minAvailable: 1
  schedulerName: volcano
  queue: default
  tasks:
    - replicas: 1
      name: "task-1"
      template:
        spec:
          restartPolicy: Never
          initContainers:
            - name: job-1
              image: alpine:3.11
              command:
                - 'sh'
                - '-c'
                - >
                  for i in 1 2 3; do
                    echo "job-1 `date`";
                    sleep 1s;
                  done;
                  echo code > /srv/input/code
              volumeMounts:
                - mountPath: /srv/input/
                  name: input
            - name: job-2
              image: alpine:3.11
              command:
                - 'sh'
                - '-c'
                - >
                  for i in 1 2 3; do
                    echo "job-2 `date`";
                    sleep 1s;
                  done;
                  cat /srv/input/code &&
                  echo artifact > /srv/input/output/artifact
              resources:
                requests:
                  cpu: 3
              volumeMounts:
                - mountPath: /srv/input/
                  name: input
                - mountPath: /srv/input/output/
                  name: output
          containers:
            - name: job-done
              image: alpine:3.11
              command:
                - 'sh'
                - '-c'
                - >
                  echo "job-1 and job-2 completed";
                  sleep 3s;
                  cat /srv/output/artifact
              volumeMounts:
                - mountPath: /srv/output/
                  name: output
          volumes:
            - name: input
              emptyDir: {}
            - name: output
              emptyDir: {}
          securityContext:
            runAsUser: 2000
            runAsGroup: 2000
            fsGroup: 2000

The logs look the same as with the native Job. Volcano's documentation is sparse, and it is not compatible with kube-batch, which leaves many open questions.

Argo

Argo can express ordered, dependent tasks, but each task runs in its own pod. Pods may be scheduled on different nodes, requiring shared storage (e.g., NFS) for volume data, which can be a performance bottleneck for I/O‑intensive workloads.
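For comparison, here is a minimal sketch of the same ordering expressed as an Argo Workflow; it assumes the Argo Workflows controller is installed, and the step bodies are only illustrative. Each step group under steps runs after the previous one completes, but every step is scheduled as its own pod:

---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sequential-steps-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: job-1
            template: echo-step
            arguments:
              parameters:
                - name: message
                  value: job-1
        - - name: job-2
            template: echo-step
            arguments:
              parameters:
                - name: message
                  value: job-2
    - name: echo-step
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.11
        command:
          - 'sh'
          - '-c'
          - >
            echo "{{inputs.parameters.message}} `date`";
            sleep 1s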

Conclusion

Argo is conceptually the best fit for ordered execution without relying on initContainers, but its pod‑isolation model makes it unsuitable for this particular scenario. The native Kubernetes Job with initContainers remains the most practical solution, while Volcano needs further investigation before adoption.

