Step-by-Step Guide to Building a Kubernetes v1.22.1 Cluster with containerd Using kubeadm
This tutorial walks through preparing three CentOS 7.6 nodes, installing and configuring containerd, setting up kubeadm, kubelet, and kubectl, initializing a Kubernetes v1.22.1 control plane, adding worker nodes, deploying the Flannel CNI plugin, installing the Kubernetes Dashboard, and cleaning up, with detailed commands and configuration files throughout.
Environment Preparation
Three CentOS 7.6 nodes (kernel 3.10.0-1062.4.1.el7.x86_64) are used. Add host entries for master, node1, and node2 to /etc/hosts and set proper DNS-compliant hostnames with hostnamectl set-hostname. Disable the firewall and SELinux, load the br_netfilter module, and create /etc/sysctl.d/k8s.conf with:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Apply the settings with sysctl -p /etc/sysctl.d/k8s.conf. Install the IPVS kernel modules via a script placed in /etc/sysconfig/modules/ipvs.modules, then verify with lsmod | grep -e ip_vs -e nf_conntrack_ipv4. Install ipset and ipvsadm for IPVS management, synchronize time with chrony, and disable swap (both at runtime and in /etc/fstab) while setting vm.swappiness=0.
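The module-loading script mentioned above typically looks like the following sketch (the exact module list is an assumption; on kernels 4.19 and later, nf_conntrack replaces nf_conntrack_ipv4):

```shell
#!/bin/bash
# /etc/sysconfig/modules/ipvs.modules -- load IPVS modules at boot
# (illustrative sketch; adjust the module list to your kernel version)
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
```

Make it executable and run it once so the modules load immediately: chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules.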
Install containerd
Download the cri-containerd-cni-1.5.5-linux-amd64.tar.gz release, extract it over the root filesystem, and add /usr/local/bin and /usr/local/sbin to PATH in ~/.bashrc:
export PATH=$PATH:/usr/local/bin:/usr/local/sbin
source ~/.bashrc
Generate the default configuration:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
Set the cgroup driver to systemd by editing the plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options section and adding SystemdCgroup = true. Configure registry mirrors (e.g., Aliyun) in the registry.mirrors block.
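The two edits might look like this inside /etc/containerd/config.toml (the mirror endpoint shown is illustrative, not a value from the original steps):

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # Use the systemd cgroup driver so containerd matches the kubelet
    SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      # Example mirror endpoint only; substitute your own accelerator URL
      endpoint = ["https://registry.aliyuncs.com"]
```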
Enable and start containerd with systemd:
systemctl daemon-reload
systemctl enable containerd --now
Verify the installation with ctr version and crictl version.
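If crictl cannot find a runtime endpoint, pointing it at the containerd socket helps; a minimal /etc/crictl.yaml (an assumed but conventional addition, not part of the original steps):

```yaml
# /etc/crictl.yaml -- tell crictl where the containerd CRI socket lives
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```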
Install kubeadm, kubelet, and kubectl
Add the Kubernetes yum repository (Google upstream or an Aliyun mirror) and install version 1.22.1:
# Google repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Install packages
yum makecache fast
yum install -y kubelet-1.22.1 kubeadm-1.22.1 kubectl-1.22.1 --disableexcludes=kubernetes
systemctl enable --now kubelet
Initialize the Cluster
Generate a default kubeadm configuration and customize it (image repository, pod subnet 10.244.0.0/16, kube-proxy mode ipvs, etc.):
kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml
# Edit kubeadm.yaml to set imageRepository, podSubnet, and other options as needed.
Pull the required images in advance (if the coredns image is missing from the mirror, pull it manually and retag):
kubeadm config images pull --config kubeadm.yaml
# If coredns fails, pull manually:
ctr -n k8s.io i pull docker.io/coredns/coredns:1.8.4
ctr -n k8s.io i tag docker.io/coredns/coredns:1.8.4 registry.aliyuncs.com/k8sxio/coredns:v1.8.4
Initialize the control plane:
kubeadm init --config kubeadm.yaml
Copy the admin kubeconfig for the regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify the master node:
kubectl get nodes
Add Worker Nodes
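For reference, the join command printed by kubeadm init has roughly this shape (the endpoint, token, and hash below are placeholders, not values from this cluster):

```shell
# Run on each worker node, with the values your own kubeadm init printed
kubeadm join 192.168.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>
```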
Copy $HOME/.kube/config to each worker so kubectl works there, install kubeadm, kubelet, and kubectl, and run the join command printed by kubeadm init. If the command was lost, regenerate it with:
kubeadm token create --print-join-command
Install Flannel CNI Plugin
Download the Flannel manifest, adjust the interface name if the node has multiple NICs, and apply:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Edit the DaemonSet to set "--iface=eth0" if needed.
kubectl apply -f kube-flannel.yml
After a short wait, verify that the pods in kube-system are Running and that each node's status changes to Ready.
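The verification above can be done with a couple of commands (a sketch; pod names and labels vary by manifest version):

```shell
kubectl get nodes -o wide               # every node should report Ready
kubectl -n kube-system get pods -o wide # flannel, coredns, and kube-proxy pods should be Running
```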
Deploy Kubernetes Dashboard
Download the recommended Dashboard manifest (v2.3.1), change the Service type to NodePort, and apply:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# Edit the Service to add "type: NodePort"
kubectl apply -f recommended.yaml
Get the NodePort (e.g., 31050) and access the Dashboard at https://<node-ip>:31050. Create a ServiceAccount with cluster-admin rights:
cat <<EOF > admin.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
EOF
kubectl apply -f admin.yaml
Retrieve the token and use it to log in:
kubectl -n kubernetes-dashboard get secret | grep admin-token
kubectl -n kubernetes-dashboard get secret admin-token-xxxxx -o jsonpath={.data.token} | base64 -d
Cleanup
If you need to reset the cluster, run:
kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /var/lib/cni/
After completing these steps you will have a functional Kubernetes v1.22.1 cluster using containerd, with CoreDNS, IPVS kube-proxy, Flannel networking, and the Dashboard UI.
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.