
Step-by-Step Guide to Upgrading a Kubernetes Cluster to v1.15.12

This guide walks through downloading the latest Kubernetes packages, preparing master and node services, adjusting nginx proxy settings, safely cordoning and draining nodes, installing the new version, updating certificates and scripts, restarting services, and rebalancing pods to complete a seamless cluster upgrade to v1.15.12.


1. Software Package Download

Download the v1.15.12 Kubernetes server tarball (kubernetes-server-linux-amd64-v1.15.12.tar.gz, used in the steps below) from the Kubernetes releases on GitHub (https://github.com/).

2. Upgrade Notes

The upgrade covers both the master and the worker nodes; this guide moves the cluster to v1.15.12.

Master services: kube-apiserver, kube-controller-manager, kube-scheduler.

Node services: kubelet and kube‑proxy.

Because apiserver is behind an nginx proxy, comment out the target node in nginx during upgrade to avoid loss of access.

Master and node run on the same physical server, so they are upgraded together.

3. Determine Node Upgrade Order

Check node information:

[root@hdss7-21 ~]# kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
hdss7-21.host.com   Ready    <none>   14d   v1.14.10
hdss7-22.host.com   Ready    <none>   14d   v1.14.10

Check pod distribution and prefer nodes with fewer pods for migration.

[root@hdss7-21 ~]# kubectl get pod -o wide -n kube-system
NAME               READY   STATUS    RESTARTS   AGE   IP          NODE                ...
...

Based on the distribution, choose the node on server 10.4.7.21 for the first upgrade.
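The distribution check above can be scripted. A minimal sketch, assuming the default `-o wide` column layout, where NODE is the 7th field when a single namespace is queried:

```shell
# Tally pods per node from `kubectl get pod -o wide --no-headers` output.
# Fields: NAME READY STATUS RESTARTS AGE IP NODE -> NODE is $7.
pods_per_node() {
    awk '{count[$7]++} END {for (n in count) print n, count[n]}' | sort
}
# Usage: kubectl get pod -o wide -n kube-system --no-headers | pods_per_node
```

The node with the smaller count is the better first candidate, since fewer pods need to migrate.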

4. Modify nginx Proxy Configuration

On both 10.4.7.21 and 10.4.7.22 (example shown for 21): comment out the apiserver upstream entry for the node being upgraded.

# vim /etc/nginx/nginx.conf
upstream kube-apiserver {
    # server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
    server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
}

Test and reload nginx:

# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# nginx -s reload
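To double-check which backends remain active after the edit, the upstream block can be filtered for uncommented `server` lines. A small helper, assuming the upstream name `kube-apiserver` as in the config above:

```shell
# Print only the active (uncommented) backends of the kube-apiserver upstream.
active_backends() {
    awk '/upstream kube-apiserver/,/}/' | grep -E '^[[:space:]]*server '
}
# Usage: active_backends < /etc/nginx/nginx.conf
```

During the upgrade of 10.4.7.21 this should print only the 10.4.7.22 entry.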

5. Delete the First Node

Mark the node as unschedulable:

# kubectl cordon hdss7-21.host.com
node/hdss7-21.host.com cordoned

Drain the node, deleting local data, ignoring DaemonSets, and forcing removal of all pods:

# kubectl drain hdss7-21.host.com --delete-local-data --ignore-daemonsets --force
... evicting pods ...
node/hdss7-21.host.com evicted

Explanation of flags:

--delete-local-data: also evicts pods that use emptyDir volumes (their local data is lost).
--ignore-daemonsets: skips DaemonSet-managed pods, which would otherwise be recreated immediately.
--force: also deletes pods that are not owned by a controller.

6. Upgrade the First Node

Extract the new version and replace binaries:

# cd /opt/src/
# tar -zxvf kubernetes-server-linux-amd64-v1.15.12.tar.gz
# mv kubernetes /opt/kubernetes-v1.15.12
# cd /opt/kubernetes-v1.15.12/server/bin/
# rm -f *.tar *_tag
# ll   (list binaries)

Copy certificates (a fresh extract has no certs directory, so create it first):

# mkdir certs
# cp /opt/kubernetes/server/bin/certs/* certs/
# ls certs/
apiserver-key.pem  ca-key.pem  client-key.pem  kubelet-key.pem  kube-proxy-client-key.pem
apiserver.pem      ca.pem      client.pem      kubelet.pem      kube-proxy-client.pem

Copy startup scripts:

# cp /opt/kubernetes/server/bin/*.sh .
# ls
apiextensions-apiserver  kube-apiserver-startup.sh  ...

Copy configuration files (creating the target directory first):

# mkdir -p /opt/kubernetes-v1.15.12/conf
# cp /opt/kubernetes/conf/* /opt/kubernetes-v1.15.12/conf/
# ls /opt/kubernetes-v1.15.12/conf/
audit.yaml  k8s-node.yaml  kubelet.kubeconfig  kube-proxy.kubeconfig  nginx-ds.yaml

Recreate the symlink:

# cd /opt/
# rm -rf kubernetes   (here /opt/kubernetes is a symlink to the old version, so only the link is removed)
# ln -s /opt/kubernetes-v1.15.12 /opt/kubernetes
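A quick sanity check that the switch took effect, using the paths above:

```shell
# Report where a symlink points; the upgraded node's /opt/kubernetes
# should resolve to the new versioned directory.
link_target() { readlink "$1"; }
# Usage: link_target /opt/kubernetes    # expect /opt/kubernetes-v1.15.12
```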

7. Restart Node Services

# supervisorctl status
etcd-server-7-21                 RUNNING   pid 6296, uptime 0:16:14
flanneld-7-21                    RUNNING   pid 7042, uptime 0:13:14
kube-apiserver-7-21              RUNNING   pid 7165, uptime 0:12:24
kube-controller-manager-7-21     RUNNING   pid 4675, uptime 0:19:03
kube-kubelet-7-21                RUNNING   pid 7184, uptime 0:12:16
kube-proxy-7-21                  RUNNING   pid 4678, uptime 0:19:03
kube-scheduler-7-21              RUNNING   pid 4673, uptime 0:19:03

Restart kubelet and kube‑proxy:

# supervisorctl restart kube-kubelet-7-21
# supervisorctl restart kube-proxy-7-21
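One step the guide leaves implicit: the node was cordoned in step 5 and stays SchedulingDisabled until it is explicitly uncordoned, so no new pods can land on it. A sketch (the command is echoed first for safety; drop the `echo` to execute it on the master):

```shell
# Mark the upgraded node schedulable again once its kubelet is back up.
NODE=hdss7-21.host.com
echo "kubectl uncordon $NODE"   # drop the echo to actually run it
```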

8. Verify Versions

# kubectl get node
NAME                STATUS   ROLES    AGE   VERSION
hdss7-21.host.com   Ready    <none>   4d22h   v1.15.12
hdss7-22.host.com   Ready    <none>   19d     v1.14.10

9. Restart Master Services

# supervisorctl restart kube-apiserver-7-21
# supervisorctl restart kube-controller-manager-7-21
# supervisorctl restart kube-scheduler-7-21

Monitor the logs of each restarted service to confirm a clean start-up before proceeding.
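supervisord can tail the logs directly. A sketch using the service names from the `supervisorctl status` listing above (commands are echoed for safety; drop the `echo` to run them):

```shell
# Print the tail command for each restarted master service's stderr log.
for svc in kube-apiserver-7-21 kube-controller-manager-7-21 kube-scheduler-7-21; do
    echo "supervisorctl tail -1000 $svc stderr"   # drop the echo to run
done
```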

10. Restore nginx Proxy Configuration

# vim /etc/nginx/nginx.conf
upstream kube-apiserver {
    server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
    server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
}
# nginx -t && nginx -s reload

11. Test the Operation Platform

Open the operations dashboard and confirm that the cluster and its workloads respond normally before moving on.

12. Re‑allocate Pods

Most pods were on 10.4.7.21; delete a pod (e.g., coredns) from the dashboard so the scheduler places it on the less‑loaded node 10.4.7.22.
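The same rebalancing can be done from the CLI instead of the dashboard. A sketch, assuming coredns carries the label `k8s-app=coredns` (verify with `kubectl -n kube-system get pod --show-labels`); the command is echoed first for safety:

```shell
# Delete the coredns pod so its Deployment recreates it; with 10.4.7.21
# still heavily loaded, the scheduler should place the new pod on 10.4.7.22.
NS=kube-system
SELECTOR=k8s-app=coredns
echo "kubectl -n $NS delete pod -l $SELECTOR"   # drop the echo to run
```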

After deletion, the pod terminates and the scheduler recreates it on the less-loaded node 10.4.7.22.


Tags: Kubernetes, Cluster Upgrade, kubectl, Node Maintenance, v1.15.12
Written by MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
