
How to Perform a Zero‑Downtime Kubernetes 1.30.x Upgrade

This guide explains how to upgrade a Kubernetes cluster from v1.30.0 to v1.30.1 without service interruption by backing up etcd, checking health, planning the rollout, upgrading master nodes and Calico, and using rolling updates and Istio canary releases for seamless application migration.


Background

In cloud‑native environments, Kubernetes is the foundation of enterprise infrastructure. Upgrading a cluster from v1.30.0 to v1.30.1 without interrupting the services running on it is a patch-level change, but doing it safely still requires careful planning, automation, and solid SRE practice.

Zero‑downtime upgrade procedure

1. Backup and upgrade planning

Backup configuration and etcd data

# Export the resources of each namespace to its own backup file
for ns in $(kubectl get ns -o jsonpath="{.items[*].metadata.name}"); do
    kubectl get all -n ${ns} -o yaml > /backup/location/${ns}_backup.yaml
done

ETCDCTL_API=3 etcdctl snapshot save /backup/db-$(date +%Y-%m-%d_%H-%M-%S) \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
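
Before moving on, it is worth confirming the snapshot file is readable; snapshot status reports the hash, revision, and key count (point it at the file written by the command above):

# use the file written by the snapshot command above
ETCDCTL_API=3 etcdctl snapshot status /backup/db-<timestamp> --write-out=table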

Verify cluster health

kubectl get nodes
kubectl get pods --all-namespaces
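
Beyond listing nodes and pods, the API server's readiness endpoint and etcd's own health check give a more direct signal (a quick sketch; the etcd flags mirror the backup command above):

kubectl get --raw='/readyz?verbose'

ETCDCTL_API=3 etcdctl endpoint health \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key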

Define upgrade plan – schedule, owners, rollback strategy.
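
kubeadm can also enumerate the versions it is able to upgrade to and flag preflight problems, which is useful input for the plan:

sudo kubeadm upgrade plan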

2. Upgrade Kubernetes components in phases

2.1 Master node upgrade

1) Prevent new pods from being scheduled on the master:

kubectl cordon <master-node-name>

2) Upgrade kubeadm, run the control-plane upgrade, then upgrade kubectl and kubelet to the target version:

sudo apt-get update
# Upgrade kubeadm first, then apply the control-plane upgrade
sudo apt-get install -y kubeadm=1.30.1-00
sudo kubeadm upgrade apply v1.30.1
# Upgrade kubectl and kubelet, then restart the kubelet to pick up the new binary
sudo apt-get install -y kubectl=1.30.1-00 kubelet=1.30.1-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet

3) Return the master to a schedulable state:

kubectl uncordon <master-node-name>

4) Confirm the master is healthy:

kubectl get nodes
kubectl get pods -n kube-system
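
To confirm the control plane actually reports the new version, check the API server build and the kubelet version on the master (substitute your node name):

kubectl version
kubectl get node <master-node-name> -o jsonpath='{.status.nodeInfo.kubeletVersion}'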

2.2 Calico network component upgrade

Download the new manifest and apply it:

curl -O -L https://docs.projectcalico.org/v3.21/manifests/calico.yaml
kubectl apply -f calico.yaml
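
Applying the manifest rolls the calico-node DaemonSet pods one node at a time; you can watch the rollout converge before checking the pods (this assumes the default DaemonSet name calico-node):

kubectl rollout status daemonset/calico-node -n kube-system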

Verify Calico pods are running:

kubectl get pods -n kube-system -l k8s-app=calico-node

3. Application service migration

3.1 Rolling update of a Deployment

Define a rolling‑update strategy in the Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-container
        image: my-image:1.30.0

Update the container image to the new version:

kubectl set image deployment/my-service my-container=my-image:1.30.1
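
The rollout can be watched as it progresses, and rolled back if the new pods fail their readiness checks (undo reverts to the previous ReplicaSet):

kubectl rollout status deployment/my-service
# if the new version misbehaves, revert to the previous revision
kubectl rollout undo deployment/my-service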

3.2 Traffic shifting with Istio Canary

Install Istio (demo profile) and add istioctl to the PATH:

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.13.1 sh -
cd istio-1.13.1
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
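
For the canary routing to take effect, the workload's namespace needs sidecar injection enabled and the pods restarted so the Envoy proxy is injected (a sketch, assuming my-service runs in the default namespace):

kubectl label namespace default istio-injection=enabled
kubectl rollout restart deployment/my-service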

Create a VirtualService that routes a percentage of traffic to the new version:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
      weight: 80
    - destination:
        host: my-service
        subset: v2
      weight: 20
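
The v1 and v2 subsets referenced above must be defined in a DestinationRule; a minimal sketch, assuming the old and new pods carry version: v1 and version: v2 labels:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2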

After validation, gradually increase the weight for v2 until it reaches 100 % to complete the upgrade.
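
One way to shift the weights is to patch the VirtualService in place; the merge patch below sends all traffic to v2 (a sketch; in practice this step would usually be driven by your CI/CD or rollout tooling):

kubectl patch virtualservice my-service --type merge \
  -p '{"spec":{"http":[{"route":[{"destination":{"host":"my-service","subset":"v1"},"weight":0},{"destination":{"host":"my-service","subset":"v2"},"weight":100}]}]}}'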

Conclusion

By performing a backup, verifying health, upgrading control‑plane and network components in isolated phases, and rolling out application changes with controlled traffic shifting, a Kubernetes cluster can be moved from v1.30.0 to v1.30.1 without observable downtime.

Tags: Kubernetes, Istio, Upgrade, Zero Downtime, Calico
Written by

Full-Stack DevOps & Kubernetes

Focused on sharing DevOps, Kubernetes, Linux, Docker, Istio, microservices, Spring Cloud, Python, Go, databases, Nginx, Tomcat, cloud computing, and related technologies.
