Cloud Native

Step-by-Step Guide to Building a Kubernetes v1.22.1 Cluster with containerd Using kubeadm

This tutorial walks through preparing three CentOS 7.6 nodes, disabling firewalls and SELinux, configuring sysctl and ipvs, installing containerd and its dependencies, generating containerd and kubeadm configurations, initializing the control plane, adding worker nodes, deploying the Flannel CNI plugin and Kubernetes Dashboard, and finally cleaning up the cluster.

DevOps Cloud Academy

This article assumes you are familiar with the basic usage of containerd and with switching a Docker-based Kubernetes cluster over to containerd. It then builds a fresh Kubernetes v1.22.1 cluster from scratch using kubeadm, with containerd as the container runtime.

Environment Preparation

Three CentOS 7.6 nodes are required. Add host entries for master, node1, and node2. Ensure the hostnames follow DNS naming conventions (avoid localhost) and set them with hostnamectl set-hostname.
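As a concrete sketch, the host entries might look like the following; the 192.168.31.x addresses are placeholders, so substitute your nodes' real IPs:

```
# Append to /etc/hosts on all three nodes.
# The 192.168.31.x addresses are placeholders - use your real node IPs.
192.168.31.30 master
192.168.31.31 node1
192.168.31.32 node2
```

Then run hostnamectl set-hostname master on the control-plane node (and node1 / node2 on the workers) so each machine's hostname matches its entry.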

Disable the firewall and SELinux, enable IPv4 forwarding, load the br_netfilter module, and create /etc/sysctl.d/k8s.conf with the following settings:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Apply the changes with sysctl -p /etc/sysctl.d/k8s.conf. Install and load the IPVS kernel modules via an /etc/sysconfig/modules/ipvs.modules script, then verify with lsmod | grep -e ip_vs -e nf_conntrack_ipv4. Install ipset and ipvsadm for IPVS management, and synchronize the server time with chrony. Finally, disable swap and set vm.swappiness=0 in /etc/sysctl.d/k8s.conf.
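A typical version of the ipvs.modules script loads the scheduler modules plus conntrack (nf_conntrack_ipv4 is correct for the CentOS 7 kernel; newer kernels use nf_conntrack instead):

```shell
#!/bin/bash
# /etc/sysconfig/modules/ipvs.modules - load IPVS modules at boot
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
```

Make it executable and run it once (chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules), then confirm the modules are loaded with the lsmod check above.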

Install containerd

Download the cri-containerd-cni-1.5.5-linux-amd64.tar.gz package, extract it into the root filesystem, and add /usr/local/bin and /usr/local/sbin to the PATH in ~/.bashrc. Reload the profile, then generate the default containerd configuration:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

Set the cgroup driver to systemd by editing the plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options block and setting SystemdCgroup = true. Configure a registry mirror under plugins."io.containerd.grpc.v1.cri".registry.mirrors for faster image pulls.
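The two edits reduce to the following excerpts of /etc/containerd/config.toml; the mirror endpoint shown here is a placeholder, so substitute a mirror you actually trust:

```toml
# /etc/containerd/config.toml - relevant excerpts after editing

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

# Mirror for docker.io pulls; the endpoint below is a placeholder.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://registry.cn-hangzhou.aliyuncs.com"]
```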

Enable and start containerd with systemd:

systemctl daemon-reload
systemctl enable containerd --now

Verify the installation using the ctr and crictl CLI tools.
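For example, assuming the bundle's default socket path of /run/containerd/containerd.sock, a quick smoke test looks like this:

```shell
# Talks to containerd directly over its own API:
ctr version

# Talks to containerd through the CRI plugin, as the kubelet will:
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
```

Setting the endpoint once in /etc/crictl.yaml avoids repeating the --runtime-endpoint flag.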

Install kubeadm, kubelet, and kubectl

Add the official Kubernetes yum repository (or the Alibaba Cloud mirror if the official one is unreachable) and install version 1.22.1 of kubeadm, kubelet, and kubectl. Enable the kubelet service to start on boot.
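A typical repo file for the Alibaba Cloud mirror looks like the sketch below (GPG checking is disabled here for brevity; enable it in production):

```
# /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
```

Then pin the version and enable the service: yum install -y kubelet-1.22.1 kubeadm-1.22.1 kubectl-1.22.1, followed by systemctl enable kubelet. The kubelet will not run successfully until kubeadm init or kubeadm join has configured it.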

Initialize the Cluster

Generate a default kubeadm.yaml configuration with kubeadm config print init-defaults --component-configs KubeletConfiguration, then edit it to set the image repository to registry.aliyuncs.com/k8sxio, enable ipvs mode for kube-proxy, and define podSubnet: 10.244.0.0/16. Pull the required images beforehand with kubeadm config images pull --config kubeadm.yaml.
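The edited fields boil down to something like the following sketch (kubeadm 1.22 uses the v1beta3 config API; everything not shown keeps the printed defaults):

```yaml
# kubeadm.yaml - only the fields changed from the printed defaults
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.22.1
imageRepository: registry.aliyuncs.com/k8sxio
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```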

Run kubeadm init --config kubeadm.yaml to bootstrap the control plane. After successful initialization, copy /etc/kubernetes/admin.conf to $HOME/.kube/config (or set KUBECONFIG) so that kubectl can communicate with the cluster.
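On the master, assuming a root shell, the sequence is roughly:

```shell
kubeadm init --config kubeadm.yaml

# Make kubectl usable for the current user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes   # the master shows NotReady until a CNI plugin is deployed
```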

Add Worker Nodes

Copy the admin.conf file to each worker node, install kubeadm, kubelet, and kubectl, then execute the kubeadm join command printed at the end of the init step. If the join command has been lost, retrieve it with kubeadm token create --print-join-command.
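For illustration, the join step looks like the following; the IP, token, and hash are placeholders for the values printed by your own kubeadm init run:

```shell
# On each worker node (values below are placeholders from your init output):
kubeadm join 192.168.31.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>

# On the master, if the original join command was lost:
kubeadm token create --print-join-command
```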

Deploy a Network Plugin (Flannel)

Download the Flannel manifest, optionally edit the --iface argument for multi-NIC hosts, and apply it with kubectl apply -f kube-flannel.yml. Verify that the cni0 and flannel.1 interfaces appear and that pods obtain IPs from the 10.244.0.0/16 range.
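A minimal verification pass might look like this, assuming the manifest deploys Flannel into kube-system (the namespace and labels vary between Flannel releases):

```shell
kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system -o wide | grep flannel

ip addr show cni0        # bridge carrying local pod traffic
ip addr show flannel.1   # VXLAN overlay interface

kubectl get pods -A -o wide   # pod IPs should fall inside 10.244.0.0/16
```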

Install Kubernetes Dashboard

Download the recommended Dashboard manifest (v2.3.1), change the Service type to NodePort, and apply it. Create a ServiceAccount and ClusterRoleBinding (admin.yaml) to grant cluster-admin privileges, then retrieve the token from the created secret and use it to log in to the Dashboard via the NodePort (e.g., 31050).
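The admin.yaml mentioned above is typically a ServiceAccount plus ClusterRoleBinding along these lines (the account name admin is an assumption; any name works as long as the binding's subject matches):

```yaml
# admin.yaml - grant a Dashboard ServiceAccount cluster-admin
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
```

Apply it, locate the admin-token-* secret in the kubernetes-dashboard namespace, and read its token with kubectl describe secret to log in.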

Cleanup

If you need to reset the cluster, run kubeadm reset, remove the CNI interfaces (cni0, flannel.1), and delete /var/lib/cni/.
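A cleanup sketch, run as root on each node (ip link is used here since ifconfig may not be installed on a minimal CentOS system):

```shell
kubeadm reset -f

ip link set cni0 down && ip link delete cni0
ip link set flannel.1 down && ip link delete flannel.1

rm -rf /var/lib/cni/
```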

After completing all steps, you will have a functional Kubernetes v1.22.1 cluster with containerd as the runtime, IPVS‑enabled kube‑proxy, Flannel CNI networking, and the Kubernetes Dashboard installed.

Kubernetes · Dashboard · containerd · CNI · Cluster Setup · flannel · kubeadm
Written by

DevOps Cloud Academy

Exploring industry DevOps practices and technical expertise.
