
How to Upgrade a Single‑Master Kubernetes Cluster to a Multi‑Master High‑Availability Setup

This guide walks through converting a single‑master Kubernetes cluster into a highly available multi‑master deployment by configuring a load‑balancing Nginx front‑end, updating API server certificates with additional SAN entries, adjusting kubeconfig files, and adding extra control‑plane nodes while verifying etcd health.

DevOps Cloud Academy

In production, a single‑master Kubernetes cluster is a single point of failure: if the master goes down, the control‑plane components such as kube‑apiserver and etcd become unavailable. This tutorial demonstrates upgrading a single‑master cluster to a highly available (HA) control plane fronted by a simple Nginx load balancer.

First, make sure Docker is installed on every node, and add the master and worker hostnames and IPs to /etc/hosts on each node.

$ cat /etc/hosts
127.0.0.1 api.k8s.local
10.151.30.70 ydzs-master2
10.151.30.71 ydzs-master3
10.151.30.11 ydzs-master
10.151.30.57 ydzs-node3
10.151.30.59 ydzs-node4
10.151.30.60 ydzs-node5
10.151.30.62 ydzs-node6
10.151.30.22 ydzs-node1
10.151.30.23 ydzs-node2

Export the current kubeadm configuration, add the required certSANs entries (including api.k8s.local and the new master hostnames/IPs), and regenerate the API server certificate.

$ kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
# then edit kubeadm.yaml so the apiServer section reads:
apiServer:
  certSANs:
  - api.k8s.local
  - ydzs-master
  - ydzs-master2
  - ydzs-master3
  - 10.151.30.11
  - 10.151.30.70
  - 10.151.30.71
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
$ mv /etc/kubernetes/pki/apiserver.{crt,key} ~
$ kubeadm init phase certs apiserver --config kubeadm.yaml
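To confirm the regenerated certificate actually covers the new names, you can inspect its SAN list with openssl. A small sketch (CERT is a helper variable introduced here for illustration; the default path is the standard kubeadm location):

```shell
# Print the Subject Alternative Name extension of the API server cert.
# CERT defaults to the standard kubeadm path; override it when testing.
CERT="${CERT:-/etc/kubernetes/pki/apiserver.crt}"
openssl x509 -in "$CERT" -noout -ext subjectAltName
```

The output should list api.k8s.local, the three master hostnames, and their IPs.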

Restart the API server container so it picks up the new certificate.

$ docker ps | grep kube-apiserver | grep -v pause
7fe227a5dd3c   ...   "kube-apiserver --ad…"
$ docker kill 7fe227a5dd3c

Deploy an Nginx load‑balancer on every node to proxy the API server traffic.

$ mkdir -p /etc/kubernetes
$ cat > /etc/kubernetes/nginx.conf <<EOF
... (nginx stream proxy configuration) ...
EOF
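The article elides the proxy configuration itself. The sketch below is an assumption of what it might look like, using Nginx's stream module to forward TCP on port 8443 to the three API servers on 6443 (IPs taken from the /etc/hosts table above; tune ports and worker settings for your environment):

```nginx
# /etc/kubernetes/nginx.conf — minimal sketch, not from the original article
user nginx;
worker_processes auto;
events {
    worker_connections 1024;
}
# Layer-4 (TCP) proxying of the Kubernetes API.
stream {
    upstream apiserver {
        server 10.151.30.11:6443;   # ydzs-master
        server 10.151.30.70:6443;   # ydzs-master2
        server 10.151.30.71:6443;   # ydzs-master3
    }
    server {
        listen 8443;                # matches api.k8s.local:8443
        proxy_pass apiserver;
    }
}
```

Because /etc/hosts maps api.k8s.local to 127.0.0.1, each node talks to its own local Nginx instance, which in turn spreads requests across all masters. One common way to run it is as a container with host networking, e.g. docker run -d --net=host -v /etc/kubernetes/nginx.conf:/etc/nginx/nginx.conf:ro nginx (an assumption; the original's exact deployment method is not shown).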

Update all kubeconfig files (kubelet.conf, controller-manager.conf, scheduler.conf, the kube-proxy ConfigMap, and ~/.kube/config) to point to https://api.k8s.local:8443, then restart the affected components.

# edit /etc/kubernetes/kubelet.conf
server: https://api.k8s.local:8443
# restart kubelet
systemctl restart kubelet
# similarly edit controller-manager.conf, scheduler.conf, and the kube-proxy ConfigMap
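A loop like the following performs the same edit on every static kubeconfig in one pass (a sketch: KUBE_DIR is a helper variable introduced here, and the file list assumes the standard kubeadm layout):

```shell
# Rewrite the server: line in every kubeadm-managed kubeconfig so that
# all components reach the API through the local Nginx load balancer.
KUBE_DIR="${KUBE_DIR:-/etc/kubernetes}"
for conf in kubelet.conf controller-manager.conf scheduler.conf admin.conf; do
    sed -i 's#server: https://.*#server: https://api.k8s.local:8443#' "$KUBE_DIR/$conf"
done
```

The kube-proxy ConfigMap still has to be edited separately with kubectl -n kube-system edit cm kube-proxy, since it lives in the cluster rather than on disk.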

Add the controlPlaneEndpoint to the kubeadm configuration and upload it back to the cluster.

$ vi kubeadm.yaml
controlPlaneEndpoint: api.k8s.local:8443
... (add certSANs as shown above) ...
$ kubeadm config upload from-file --config kubeadm.yaml

Update the cluster-info ConfigMap in the kube-public namespace to expose the new load‑balancer address.

$ kubectl -n kube-public edit cm cluster-info
... set server: https://api.k8s.local:8443 ...

Upload the control‑plane certificates (the command prints a certificate key), generate a fresh join command, and join the additional masters using the --control-plane flag.

$ kubeadm init phase upload-certs --upload-certs
$ kubeadm token create --print-join-command --config kubeadm.yaml
kubeadm join api.k8s.local:8443 --token ... --discovery-token-ca-cert-hash sha256:... --control-plane --certificate-key ...

On each new master, install the same Kubernetes binaries, pull required images, and execute the join command. Verify that the new masters appear in kubectl get nodes and that the etcd cluster reports healthy endpoints.

$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
ydzs-master    Ready    master   299d  v1.17.11
ydzs-master2   Ready    master   34m   v1.17.11
ydzs-master3   Ready    master   10m   v1.17.11
... (other nodes) ...
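To verify etcd health, one option is to run etcdctl inside the etcd static pod on the original master. A sketch (the pod name and certificate paths follow kubeadm defaults and are assumptions; verify them on your cluster):

```shell
# Query every etcd member through the etcd pod on the original master.
# Pod name and cert paths are kubeadm defaults, not taken from the original.
kubectl -n kube-system exec etcd-ydzs-master -- etcdctl \
  --endpoints=https://10.151.30.11:2379,https://10.151.30.70:2379,https://10.151.30.71:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
```

All three endpoints should report healthy before you consider the upgrade complete.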

With these steps complete, the control plane is highly available: three masters sit behind an Nginx load balancer, and the API server certificate covers all required SAN entries.

Kubernetes · nginx · HA · kubeadm · load balancer
Written by

DevOps Cloud Academy

Exploring industry DevOps practices and technical expertise.
