
Understanding Pod Startup and Shutdown Phases in Kubernetes Rolling Deployments

This article explains how Kubernetes rolling deployments affect pod lifecycle, detailing the startup and shutdown phases, the impact of readiness probes and endpoint updates, and how to prevent connection interruptions by using preStop hooks and proper configuration for zero‑downtime deployments.

DevOps Cloud Academy

Kubernetes rolling deployments replace old containers with new ones gradually, which can introduce brief periods of downtime. For low‑traffic services this may be negligible, but for critical applications such as payment gateways even a second of interruption is unacceptable.

During a rolling update, two events matter: what happens when a Pod starts and what happens when a Pod stops. This tutorial assumes familiarity with basic Kubernetes concepts and some Docker experience.

Pod Startup Phase

When a Pod is launched without a readiness probe, the endpoint controller adds the Pod's IP to the Service as soon as the container starts, so the Pod can receive traffic before the application inside it is ready to serve, producing connection errors during the update. Adding a readiness probe ensures the Pod only receives traffic after it reports ready.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-highly-available-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: highly-available
  template:
    metadata:
      labels:
        app: highly-available
    spec:
      containers:
      - name: highly-available
        image: highly-available-image:latest
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /health-check
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3

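The readiness probe controls which Pods back a Service, while the Service itself selects Pods by label. For context, a minimal Service matching the Deployment above might look like this (the Service name here is illustrative, not from the original article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: highly-available-svc   # illustrative name
spec:
  selector:
    app: highly-available      # matches the Deployment's Pod labels
  ports:
  - port: 80
    targetPort: 80
```

Only Pods whose readiness probe is passing are listed in this Service's endpoints, so clients are never routed to a container that has not yet reported ready.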
Pod Shutdown Phase

When the API server receives a delete request for a Pod, it marks the Pod as terminating in etcd and notifies both the endpoint controller and the kubelet. The endpoint controller removes the Pod's IP from all Services, and kube-proxy updates the iptables rules on each node to stop routing new traffic to the terminating Pod. These two paths run in parallel, however: the kubelet may kill the container before the iptables update completes, leaving a brief window in which traffic is still directed to a terminating Pod and connections fail.

Solution

To close this race condition, add a preStop hook to the container lifecycle. The hook makes the container pause (e.g., sleep 20) before it exits, giving kube-proxy enough time to update iptables and route new connections to healthy Pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-highly-available-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: highly-available
  template:
    metadata:
      labels:
        app: highly-available
    spec:
      containers:
      - name: highly-available
        image: highly-available-image:latest
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /health-check
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 20"]  # /bin/sh is available in most images, including ones without bash

Make sure the sleep duration is shorter than terminationGracePeriodSeconds (30 seconds by default); the grace period countdown includes the preStop hook, so a longer sleep causes the kubelet to force-kill the container before it can shut down cleanly.
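If a longer drain is needed, the grace period can be raised explicitly in the Pod spec rather than shortening the sleep. A sketch of the relevant fragment:

```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60  # must exceed the preStop sleep
      containers:
      - name: highly-available
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 20"]
```

This keeps roughly 40 seconds of headroom for the application's own graceful shutdown after the hook completes.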

Conclusion

By adding readiness probes and a preStop hook to the deployment configuration, rolling updates can be performed without causing user‑visible connection interruptions, enabling stable, zero‑downtime deployments even when many versions are released daily.

Tags: Kubernetes, zero downtime, preStop hook, readiness probe, Pod lifecycle, rolling deployment