
How to Build a High‑Availability Kubernetes Cluster: Kubeadm & Binary Package Guide

This tutorial walks you through planning, preparing hardware, choosing a deployment method, and step-by-step installation of a highly available Kubernetes cluster using kubeadm and manual binary packages. It covers system initialization, certificate generation, component configuration, CNI networking, and cluster verification.


1. Planning the K8s Environment

1.1 Single‑master cluster

A single control‑plane node (API server, controller‑manager, scheduler, etcd) serving one or more worker nodes. Easiest to build, but the master is a single point of failure.

1.2 Multi‑master cluster

Two or more control‑plane nodes behind a load balancer, backed by an etcd cluster, so the control plane keeps serving if any one master fails.

2. Server Hardware Requirements

CentOS 7.x (x86_64) on one or more servers

Memory ≥ 2 GB, CPU ≥ 2 cores, Disk ≥ 30 GB

Network connectivity between all machines

Internet access for pulling images

Swap must be disabled

3. Deployment Methods

kubeadm: quick deployment using kubeadm init and kubeadm join

Binary packages: manual installation of each component, for deeper learning and control

4. Deploying with kubeadm

4.1 System Initialization (common to all nodes)

Disable firewalld:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux (permanent):

sed -i 's/enforcing/disabled/' /etc/selinux/config
reboot

Disable swap (permanent):

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
reboot

Set hostnames:

hostnamectl set-hostname k8s-master   # on 192.168.217.100
hostnamectl set-hostname k8s-node1   # on 192.168.217.101
hostnamectl set-hostname k8s-node2   # on 192.168.217.102

Add hosts entries on each node:

cat >> /etc/hosts <<EOF
192.168.217.100 k8s-master
192.168.217.101 k8s-node1
192.168.217.102 k8s-node2
EOF

Enable bridge traffic to iptables:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
modprobe br_netfilter
sysctl --system

Synchronize time:

yum install -y ntpdate
ntpdate time.windows.com

Install ipset and ipvsadm, then load ipvs modules:

yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

4.2 Install Docker (container runtime)

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.3.ce-3.el7
systemctl enable docker && systemctl start docker
docker version
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker

4.3 Install kubeadm, kubelet, kubectl

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet

4.4 Initialize the Master Node

kubeadm init \
  --apiserver-advertise-address=192.168.217.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

After init, set up kubectl for the root user:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

4.5 Join Worker Nodes

kubeadm join 192.168.217.100:6443 \
  --token 4016im.eg4e10yamcbxjm59 \
  --discovery-token-ca-cert-hash sha256:ce2111ce594e5189255144a72268250e5eedda87470cc3a1f69f8c973927699e

Generate a new token if the previous one expires:

kubeadm token create --print-join-command   # prints a complete join command with a fresh token
kubeadm token create --ttl 0                # creates a token that never expires
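If the hash from the original join command was not saved either, it can be recomputed from the cluster CA certificate. A small helper using the standard openssl recipe (the certificate path shown is kubeadm's default):

```shell
# Recompute the --discovery-token-ca-cert-hash value for kubeadm join.
# Argument: path to the cluster CA certificate
# (kubeadm writes it to /etc/kubernetes/pki/ca.crt on the master).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# Usage on the master:
#   kubeadm join 192.168.217.100:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)
```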

4.6 Deploy a CNI Plugin (Flannel)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

Verify pods in the kube-system namespace and node status:

kubectl get pods -n kube-system
kubectl get nodes

5. Deploying with Binary Packages (Advanced)

5.1 Prepare etcd Cluster

Install cfssl tools and generate a CA:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

Create ca-config.json and ca-csr.json, then generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
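The two input files are not reproduced in the original. A typical minimal pair might look like the following sketch (the profile name, expiry, and subject fields are illustrative values, not mandated by cfssl):

```shell
# Illustrative ca-config.json: one signing profile ("www") with a 10-year
# expiry, usable for server auth, client auth, and signing.
cat > ca-config.json <<'EOF'
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

# Illustrative ca-csr.json: the CA's own subject.
cat > ca-csr.json <<'EOF'
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF
```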

Generate server certificates for the three etcd members (etcd‑1, etcd‑2, etcd‑3) and place them under /opt/etcd/ssl. Create an etcd.conf file for each node with appropriate ETCD_NAME, ETCD_DATA_DIR, and URLs, then define a systemd service /usr/lib/systemd/system/etcd.service that starts etcd with TLS options.
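The original does not show these files. A sketch for the first member (IPs follow the host plan above; every key shown is a standard etcd v3 option, but the exact values are illustrative):

```shell
# Illustrative config for etcd-1 on k8s-master; repeat on the other two
# members with their own ETCD_NAME and IP addresses.
mkdir -p /opt/etcd/cfg
cat > /opt/etcd/cfg/etcd.conf <<'EOF'
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.217.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.217.100:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.217.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.217.100:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.217.100:2380,etcd-2=https://192.168.217.101:2380,etcd-3=https://192.168.217.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Matching systemd unit: etcd reads the ETCD_* variables from the
# environment file; TLS material is passed as flags.
cat > /usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Start with `systemctl daemon-reload && systemctl enable --now etcd` on each member.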

5.2 Generate API Server Certificates

Use the same cfssl workflow to create a CA and server certificates for the Kubernetes API server, placing them in /opt/kubernetes/ssl.
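The API server certificate must list every name and IP a client might use to reach it. An illustrative CSR file (the first service IP, node IPs, and subject fields are assumptions matching the host plan above; adjust to your service CIDR):

```shell
# Illustrative server-csr.json for the kube-apiserver certificate.
# The hosts list is the certificate's SANs: loopback, the masters/nodes,
# the first IP of the service CIDR, and the in-cluster DNS names.
cat > server-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.0.1",
    "192.168.217.100",
    "192.168.217.101",
    "192.168.217.102",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF
```

Sign it with the same `cfssl gencert` invocation used for the etcd certificates, against the Kubernetes CA.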

5.3 Deploy Master Components

kube-apiserver : create /opt/kubernetes/cfg/kube-apiserver.conf with options for etcd endpoints, secure port, certificates, and audit logging. Register a systemd unit /usr/lib/systemd/system/kube-apiserver.service and start it.

kube-controller-manager : configure /opt/kubernetes/cfg/kube-controller-manager.conf (leader election, CIDR ranges, signing certs) and a systemd unit /usr/lib/systemd/system/kube-controller-manager.service.

kube-scheduler : configure /opt/kubernetes/cfg/kube-scheduler.conf (leader election, master address) and a systemd unit /usr/lib/systemd/system/kube-scheduler.service.
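None of these files are reproduced in the original. As one representative, a kube-apiserver.conf and its unit might look like the sketch below (each flag is a standard v1.18 option; the IPs follow the host plan above, while the 10.0.0.0/24 service CIDR and flag values are illustrative). The controller-manager and scheduler follow the same EnvironmentFile + ExecStart pattern:

```shell
mkdir -p /opt/kubernetes/cfg /opt/kubernetes/logs

# Illustrative kube-apiserver options: etcd endpoints, secure port,
# certificates, bootstrap-token auth, and audit logging.
cat > /opt/kubernetes/cfg/kube-apiserver.conf <<'EOF'
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.217.100:2379,https://192.168.217.101:2379,https://192.168.217.102:2379 \
--bind-address=192.168.217.100 \
--secure-port=6443 \
--advertise-address=192.168.217.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

# Unit: source the options file and pass the variable to the binary.
cat > /usr/lib/systemd/system/kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Start with `systemctl daemon-reload && systemctl enable --now kube-apiserver`, then repeat the pattern for kube-controller-manager and kube-scheduler with their own option files.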

5.4 TLS Bootstrapping for Nodes

Create a bootstrap token and a token.csv file, then generate bootstrap.kubeconfig and kube-proxy.kubeconfig using kubectl config commands. Distribute these files to each worker node.
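A sketch of those steps (the token format and file paths follow the conventions used earlier in this guide; the apiserver address matches the host plan, and everything else is an assumption to adapt):

```shell
# Illustrative bootstrap-token setup. kubectl config commands only write
# a local kubeconfig file; no running cluster is needed at this point.
KUBE_APISERVER="https://192.168.217.100:6443"
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# token.csv format: token,user,uid,"group" — consumed by the apiserver's
# --token-auth-file flag.
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > token.csv

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```

The kube-proxy.kubeconfig is built the same way, using the kube-proxy client certificate instead of the bootstrap token.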

5.5 Deploy Node Components

kubelet : create /opt/kubernetes/cfg/kubelet.conf on each node (hostname override, CNI plugin, kubeconfig paths, pod infra image). Register a systemd unit /usr/lib/systemd/system/kubelet.service and enable it.

kube-proxy : create /opt/kubernetes/cfg/kube-proxy.conf with hostname override and kubeconfig, then register /usr/lib/systemd/system/kube-proxy.service.
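A sketch of both option files for the first worker (hostname and paths follow the plan above; every flag shown is a real kubelet/kube-proxy v1.18 flag, but the values are illustrative):

```shell
mkdir -p /opt/kubernetes/cfg

# Illustrative kubelet options for k8s-node1: CNI networking, bootstrap
# kubeconfig for TLS bootstrapping, and the pod infra (pause) image.
cat > /opt/kubernetes/cfg/kubelet.conf <<'EOF'
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--hostname-override=k8s-node1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1"
EOF

# Illustrative kube-proxy options: hostname override plus its kubeconfig.
cat > /opt/kubernetes/cfg/kube-proxy.conf <<'EOF'
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--hostname-override=k8s-node1 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
```

The kubelet.kubeconfig referenced above does not exist yet; the kubelet writes it automatically once its bootstrap CSR is approved. Both systemd units reuse the EnvironmentFile + ExecStart pattern from the master components.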

5.6 Deploy CNI Plugin on Workers

Repeat the Flannel deployment steps (download kube-flannel.yml and apply with kubectl) on the worker nodes.

5.7 Verify Cluster Health

kubectl get cs
kubectl cluster-info
kubectl get nodes

All components should report Healthy and the nodes should be in Ready state.

6. Certificate Management & CSR Approval

Node kubelet certificate signing requests appear via kubectl get csr. Approve them with:

kubectl certificate approve <csr-name>

After approval, the kubelet obtains its client certificate automatically.
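With several nodes bootstrapping at once, approving one CSR at a time gets tedious. A one-liner to approve everything still pending (assumes kubectl is configured against the cluster):

```shell
# Approve all CSRs whose CONDITION column reads "Pending" in one pass.
kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve
```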


Tags: Docker, Kubernetes, TLS, etcd, CNI, kubeadm

Written by Linux Cloud Computing Practice