How to Build a High‑Availability Kubernetes Cluster: Kubeadm & Binary Package Guide
This tutorial walks through planning, hardware preparation, choosing a deployment method, and step-by-step installation of a highly available Kubernetes cluster using either kubeadm or manual binary packages. It covers system initialization, certificate generation, component configuration, CNI networking, and cluster verification.
1. Planning the K8s Environment
1.1 Single‑master cluster
1.2 Multi‑master cluster
2. Server Hardware Requirements
CentOS 7.x (x86_64) on one or more servers
Memory ≥ 2 GB, CPU ≥ 2 cores, Disk ≥ 30 GB
Network connectivity between all machines
Internet access for pulling images
Swap must be disabled
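These minimums can be sanity-checked before installing anything. The script below is our own illustration, not part of any Kubernetes tooling; the helper name meets_minimum is made up for this sketch:

```shell
#!/bin/sh
# Illustrative preflight check: compare this host's resources against
# the tutorial minimums (2 CPU cores, 2 GB RAM).
meets_minimum() {  # meets_minimum <actual> <required> -> exit 0 if ok
  [ "$1" -ge "$2" ]
}

cpu=$(nproc 2>/dev/null || echo 1)
mem_mb=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo 2>/dev/null)

meets_minimum "$cpu" 2       || echo "WARN: have $cpu CPU core(s), need >= 2"
meets_minimum "$mem_mb" 2048 || echo "WARN: have ${mem_mb} MB RAM, need >= 2048 MB"
```

Disk space and network reachability still need to be checked by hand (df -h, ping between nodes).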
3. Deployment Methods
kubeadm: quick deployment using kubeadm init and kubeadm join
Binary packages: manual installation of each component, for deeper learning and control
4. Deploying with kubeadm
4.1 System Initialization (common to all nodes)
Disable firewalld:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux (permanent):
sed -i 's/enforcing/disabled/' /etc/selinux/config
reboot
Disable swap (permanent):
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
reboot
Set hostnames:
hostnamectl set-hostname k8s-master # on 192.168.217.100
hostnamectl set-hostname k8s-node1 # on 192.168.217.101
hostnamectl set-hostname k8s-node2 # on 192.168.217.102
Add hosts entries on each node:
cat >> /etc/hosts <<EOF
192.168.217.100 k8s-master
192.168.217.101 k8s-node1
192.168.217.102 k8s-node2
EOF
Enable bridge traffic to iptables:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
modprobe br_netfilter
sysctl --system
Synchronize time:
yum install -y ntpdate
ntpdate time.windows.com
Install ipset and ipvsadm, then load ipvs modules:
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
4.2 Install Docker (required CRI)
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.3.ce-3.el7
systemctl enable docker && systemctl start docker
docker version
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
4.3 Install kubeadm, kubelet, kubectl
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
4.4 Initialize the Master Node
kubeadm init \
--apiserver-advertise-address=192.168.217.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
After init, set up kubectl for the root user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
4.5 Join Worker Nodes
kubeadm join 192.168.217.100:6443 \
--token 4016im.eg4e10yamcbxjm59 \
--discovery-token-ca-cert-hash sha256:ce2111ce594e5189255144a72268250e5eedda87470cc3a1f69f8c973927699e
Generate a new token if the previous one expires:
kubeadm token create --print-join-command
kubeadm token create --ttl 0
4.6 Deploy a CNI Plugin (Flannel)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Verify pods in the kube-system namespace and node status:
kubectl get pods -n kube-system
kubectl get nodes
5. Deploying with Binary Packages (Advanced)
5.1 Prepare etcd Cluster
Install cfssl tools and generate a CA:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
Create ca-config.json and ca-csr.json, then generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate server certificates for the three etcd members (etcd-1, etcd-2, etcd-3) and place them under /opt/etcd/ssl. Create an etcd.conf file for each node with the appropriate ETCD_NAME, ETCD_DATA_DIR, and URLs, then define a systemd service /usr/lib/systemd/system/etcd.service that starts etcd with TLS options.
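As a sketch of the files described above, the first member's configuration and unit file might look like this (IPs follow this tutorial's hosts; ETCD_NAME and the 192.168.217.x addresses must be adjusted on each node, and paths are assumptions matching the /opt/etcd layout):

```ini
# /opt/etcd/cfg/etcd.conf -- sketch for member etcd-1 (adjust per node)
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.217.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.217.100:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.217.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.217.100:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.217.100:2380,etcd-2=https://192.168.217.101:2380,etcd-3=https://192.168.217.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# /usr/lib/systemd/system/etcd.service -- sketch
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Start each member with systemctl enable etcd && systemctl start etcd; the first member will block until a quorum of peers is up.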
5.2 Generate API Server Certificates
Use the same cfssl workflow to create a CA and server certificates for the Kubernetes API server, placing them in /opt/kubernetes/ssl.
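A server-csr.json for the API server typically lists every address clients may use. The hosts below are a sketch following this tutorial's node IPs plus 10.96.0.1 (the first service IP for a 10.96.0.0/12 service CIDR, as used in the kubeadm section); adjust to your topology:

```json
{
  "CN": "kubernetes",
  "hosts": [
    "10.96.0.1",
    "127.0.0.1",
    "192.168.217.100",
    "192.168.217.101",
    "192.168.217.102",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "O": "k8s", "OU": "System" }]
}
```

Sign it against the CA with: cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server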
5.3 Deploy Master Components
kube-apiserver: create /opt/kubernetes/cfg/kube-apiserver.conf with options for etcd endpoints, secure port, certificates, and audit logging. Register a systemd unit /usr/lib/systemd/system/kube-apiserver.service and start it.
kube-controller-manager: configure /opt/kubernetes/cfg/kube-controller-manager.conf (leader election, CIDR ranges, signing certs) and a systemd unit /usr/lib/systemd/system/kube-controller-manager.service.
kube-scheduler: configure /opt/kubernetes/cfg/kube-scheduler.conf (leader election, master address) and a systemd unit /usr/lib/systemd/system/kube-scheduler.service.
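As an abridged sketch, the kube-apiserver options described above often look like the following (flags trimmed to the essentials; paths and IPs follow this tutorial's /opt/kubernetes layout and master address, and should be adjusted to your environment):

```ini
# /opt/kubernetes/cfg/kube-apiserver.conf -- abridged sketch
KUBE_APISERVER_OPTS="--etcd-servers=https://192.168.217.100:2379,https://192.168.217.101:2379,https://192.168.217.102:2379 \
  --bind-address=192.168.217.100 \
  --secure-port=6443 \
  --advertise-address=192.168.217.100 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --tls-cert-file=/opt/kubernetes/ssl/server.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/etcd/ssl/ca.pem \
  --etcd-certfile=/opt/etcd/ssl/server.pem \
  --etcd-keyfile=/opt/etcd/ssl/server-key.pem \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/opt/kubernetes/cfg/token.csv \
  --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
```

The systemd unit then sources this file with EnvironmentFile= and starts kube-apiserver with $KUBE_APISERVER_OPTS.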
5.4 TLS Bootstrapping for Nodes
Create a bootstrap token and a token.csv file, then generate bootstrap.kubeconfig and kube-proxy.kubeconfig using kubectl config commands. Distribute these files to each worker node.
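The bootstrap.kubeconfig generation might look like the following sketch; the server address follows this tutorial's master IP, and BOOTSTRAP_TOKEN is a placeholder that must match an entry in token.csv:

```shell
KUBE_APISERVER="https://192.168.217.100:6443"
BOOTSTRAP_TOKEN="<token-from-token.csv>"   # placeholder, do not copy literally

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```

kube-proxy.kubeconfig follows the same pattern, using the kube-proxy client certificate and key instead of a token.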
5.5 Deploy Node Components
kubelet: create /opt/kubernetes/cfg/kubelet.conf on each node (hostname override, CNI plugin, kubeconfig paths, pod infra image). Register a systemd unit /usr/lib/systemd/system/kubelet.service and enable it.
kube-proxy: create /opt/kubernetes/cfg/kube-proxy.conf with hostname override and kubeconfig, then register /usr/lib/systemd/system/kube-proxy.service.
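A kubelet.conf along the lines described above might look like this sketch for k8s-node1 (paths follow the /opt/kubernetes layout; the pause image tag is an assumption for a v1.18 cluster and the hostname must change per node):

```ini
# /opt/kubernetes/cfg/kubelet.conf -- sketch for k8s-node1 (adjust per node)
KUBELET_OPTS="--hostname-override=k8s-node1 \
  --network-plugin=cni \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --config=/opt/kubernetes/cfg/kubelet-config.yml \
  --cert-dir=/opt/kubernetes/ssl \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2"
```

On first start the kubelet uses bootstrap.kubeconfig to submit a CSR; kubelet.kubeconfig is written automatically once the request is approved (see section 6).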
5.6 Deploy CNI Plugin on Workers
Repeat the Flannel deployment steps (download kube-flannel.yml and apply with kubectl) on the worker nodes.
5.7 Verify Cluster Health
kubectl get cs
kubectl cluster-info
kubectl get nodes
All components should report Healthy and the nodes should be in Ready state.
6. Certificate Management & CSR Approval
Node kubelet certificate signing requests appear via kubectl get csr. Approve them with:
kubectl certificate approve <csr-name>
After approval, the kubelet obtains its client certificate automatically.
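When several nodes bootstrap at once, the pending requests can be approved in one pass. The pending_csrs helper below is our own convenience wrapper, not a kubectl subcommand; it filters the tabular kubectl get csr output by its last (CONDITION) column:

```shell
#!/bin/sh
# pending_csrs: read `kubectl get csr` output on stdin and print the NAME
# column of rows whose CONDITION (last column) is exactly "Pending".
pending_csrs() {
  awk '$NF == "Pending" {print $1}'
}

# On a live cluster (sketch):
# kubectl get csr | pending_csrs | xargs -r kubectl certificate approve
```

xargs -r skips the approve call entirely when nothing is pending.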