
Step‑by‑Step Deployment of a Highly Available Kubernetes Cluster with Nginx/Keepalived Load Balancer, Flannel CNI, IPVS, Dashboard, and Harbor Registry

This comprehensive guide walks you through installing Docker and containerd, configuring yum repositories, setting up kubeadm/kubelet/kubectl, initializing a multi‑master Kubernetes cluster, enabling Flannel CNI and IPVS, building a Nginx‑Keepalived HA load balancer, deploying the Kubernetes dashboard, configuring NFS storage with a dynamic provisioner, and installing a secure Harbor image registry for private images.


This article provides a complete, hands‑on tutorial for building a production‑grade Kubernetes environment from scratch on CentOS nodes.

1. Prerequisites and Host Configuration

Set hostnames and update /etc/hosts on all nodes (1 master, 2 workers, optional second master for HA).

# Example hostname configuration
hostnamectl set-hostname k8s-master-168-0-113
hostnamectl set-hostname k8s-node1-168-0-114
hostnamectl set-hostname k8s-node2-168-0-115
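
The matching /etc/hosts entries, with IPs taken from the hostnames above (a sketch; `cluster-endpoint` is the control-plane name passed to kubeadm later — point it at the master initially, and at the VIP once the HA load balancer is up):

```text
# /etc/hosts on every node
192.168.0.113  k8s-master-168-0-113  cluster-endpoint
192.168.0.114  k8s-node1-168-0-114
192.168.0.115  k8s-node2-168-0-115
```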

2. Install Docker and containerd, and Configure Mirrors

Note that Kubernetes 1.24 removed the dockershim, so containerd (installed alongside Docker CE as the containerd.io package) is the runtime the kubelet actually talks to; Docker remains useful for building and pulling images.

# Install Docker CE
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker && systemctl enable docker

# Configure the Docker daemon to use an Alibaba Cloud registry mirror
# (replace <your-id> with your own Aliyun accelerator address)
cat >/etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker

3. Install Kubernetes Packages (kubeadm, kubelet, kubectl)
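
The install below assumes a Kubernetes yum repository is already configured; a commonly used Aliyun-mirror repo file (an assumption here — substitute whichever mirror you prefer) looks like:

```shell
cat >/etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
```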

# Install specific version 1.24.1
yum install -y kubelet-1.24.1 kubeadm-1.24.1 kubectl-1.24.1 --disableexcludes=kubernetes
systemctl enable --now kubelet

4. System Tuning (Swap, SELinux, Firewall, Time Sync, IPVS)

# Disable swap
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
# Disable SELinux temporarily and permanently
setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Load br_netfilter and configure sysctl for bridge traffic
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
# Install IPVS tools
yum install -y ipset ipvsadm
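
The section heading also covers the firewall and time synchronization; the usual CentOS steps (assuming firewalld and chrony, which are assumptions on your base image) are:

```shell
# Disable the firewall (or explicitly open the Kubernetes ports instead)
systemctl disable --now firewalld
# Keep clocks in sync across all nodes with chrony
yum install -y chrony
systemctl enable --now chronyd
chronyc sources
```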

5. Initialize the Kubernetes Cluster

# Initialize master (replace IPs as needed)
kubeadm init \
  --apiserver-advertise-address=192.168.0.113 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=cluster-endpoint \
  --kubernetes-version v1.24.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5

# Set up kubectl for the regular user
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

6. Install Flannel CNI

# (Optional) Pre-pull the Flannel image. Note: with containerd as the CRI
# runtime, images pulled by Docker are not visible to the kubelet; use
# crictl/ctr to pre-pull for the cluster.
docker pull quay.io/coreos/flannel:v0.14.0
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
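
Before joining workers, it is worth confirming the CNI came up; with the v0.14 manifest Flannel runs in kube-system (namespace and label differ in newer manifests):

```shell
kubectl get pods -n kube-system -l app=flannel   # daemonset pods should be Running
kubectl get nodes                                # the master should report Ready
```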

7. Join Worker Nodes

# On each worker node
kubeadm join 192.168.0.113:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

8. Configure IPVS Load Balancing for Services

# Load IPVS kernel modules
modprobe ip_vs && modprobe ip_vs_rr && modprobe ip_vs_wrr && modprobe ip_vs_sh && modprobe nf_conntrack
# Edit kube-proxy ConfigMap to use IPVS mode
kubectl edit cm -n kube-system kube-proxy
# Set mode: ipvs
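
The edit only changes the ConfigMap; the running kube-proxy pods must be recreated to pick it up, after which the IPVS tables become visible:

```shell
kubectl -n kube-system rollout restart daemonset kube-proxy
ipvsadm -Ln   # virtual servers should now be listed for each Service
```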

9. High‑Availability Load Balancer (Nginx + Keepalived)

Run Nginx as a layer-4 (TCP stream) proxy in front of the API servers and configure Keepalived on both master nodes to provide a virtual IP (VIP) for the API server.
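
The Nginx half of the pair can be a minimal stream proxy listening on 16443 (the port used in the verification step) and balancing across both API servers; a sketch, where the second master's IP 192.168.0.116 is a hypothetical example:

```nginx
# /etc/nginx/nginx.conf (top-level stream block; requires the stream module)
stream {
    upstream k8s-apiserver {
        server 192.168.0.113:6443 max_fails=3 fail_timeout=30s;  # master 1
        server 192.168.0.116:6443 max_fails=3 fail_timeout=30s;  # master 2 (hypothetical IP)
    }
    server {
        listen 16443;               # VIP:16443 fronts the API servers
        proxy_pass k8s-apiserver;
        proxy_connect_timeout 2s;
    }
}
```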

# Example Keepalived configuration on the primary master.
# The interface name ens33 is an assumption; the VIP is the 192.168.0.120
# address used elsewhere in this guide.
cat >/etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.120
    }
}
EOF
systemctl enable --now keepalived

10. Deploy Kubernetes Dashboard (NodePort)

# Download and modify dashboard manifest to use NodePort 31443
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
# recommended.yaml does not declare an explicit Service type or nodePort,
# so insert both into the kubernetes-dashboard Service
# (the seds assume the v2.6.0 manifest layout)
sed -i '0,/ports:/s/ports:/type: NodePort\n  ports:/' recommended.yaml
sed -i '/targetPort: 8443/a\      nodePort: 31443' recommended.yaml
kubectl apply -f recommended.yaml

# Create an admin ServiceAccount and bind the cluster-admin role
# (the admin-user name follows the upstream dashboard docs)
cat >ServiceAccount.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f ServiceAccount.yaml
# On Kubernetes 1.24, ServiceAccount tokens are no longer auto-created;
# issue one for the dashboard login:
kubectl -n kubernetes-dashboard create token admin-user

11. Install NFS Server and Dynamic Provisioner

# On the designated NFS server (VIP 192.168.0.120)
mkdir -p /opt/nfsdata && chmod 777 /opt/nfsdata
cat >/etc/exports <<'EOF'
/opt/nfsdata *(rw,sync,no_root_squash)
EOF
systemctl enable --now nfs-server && exportfs -r

# On client nodes, mount the export persistently
# (the /mnt/nfsdata mount point is an example path)
yum install -y nfs-utils
mkdir -p /mnt/nfsdata
echo "192.168.0.120:/opt/nfsdata /mnt/nfsdata nfs defaults 0 0" >> /etc/fstab
mount -a

Deploy the nfs-subdir-external-provisioner via Helm; the chart creates the nfs-client StorageClass referenced below and marks it as the default.

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace nfs-provisioner --create-namespace \
  --set image.repository=willdockerhub/nfs-subdir-external-provisioner \
  --set image.tag=v4.0.2 \
  --set replicaCount=2 \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true \
  --set nfs.server=192.168.0.120 \
  --set nfs.path=/opt/nfsdata
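
A quick way to confirm dynamic provisioning works is a throwaway PVC (the claim name is illustrative); it should reach Bound on its own and create a subdirectory under /opt/nfsdata:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc            # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client  # the class installed by the chart
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f, check kubectl get pvc, then delete it.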

12. Install Harbor Private Registry (HTTPS)

# Create namespace
kubectl create ns harbor
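
The TLS secret below needs a certificate pair; if you still have to generate a self-signed one, a minimal sketch (the CA name and SAN values are assumptions matching this guide's hostnames) is:

```shell
# Generate a private CA
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=myharbor-ca" -out ca.crt
# Generate the server key and CSR for myharbor.com
openssl genrsa -out myharbor.com.key 4096
openssl req -new -key myharbor.com.key -subj "/CN=myharbor.com" -out myharbor.com.csr
# Sign the server certificate, adding the SANs Harbor's ingress will serve
printf "subjectAltName=DNS:myharbor.com,DNS:notary.myharbor.com\n" > san.ext
openssl x509 -req -in myharbor.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 3650 -extfile san.ext -out myharbor.com.crt
```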

# Create TLS secret (use the generated myharbor.com.crt/key)
kubectl create secret tls myharbor.com --key myharbor.com.key --cert myharbor.com.crt -n harbor

# Add Harbor Helm repo and install
helm repo add harbor https://helm.goharbor.io
helm install myharbor harbor/harbor \
  --namespace harbor \
  --set expose.ingress.hosts.core=myharbor.com \
  --set expose.ingress.hosts.notary=notary.myharbor.com \
  --set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
  --set expose.tls.secretName=myharbor.com \
  --set persistence.enabled=true \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
  --set externalURL=https://myharbor.com \
  --set harborAdminPassword=Harbor12345

13. Configure Containerd to Pull Images from Harbor

# Directory for Harbor CA
mkdir -p /etc/containerd/myharbor.com
cp ca.crt /etc/containerd/myharbor.com/

# Append to /etc/containerd/config.toml so the CRI plugin trusts Harbor's CA
# (the section path below assumes containerd's classic inline registry config;
#  adjust if you use the newer config_path/hosts.toml layout)
cat >>/etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".tls]
  ca_file = "/etc/containerd/myharbor.com/ca.crt"
EOF
systemctl restart containerd

14. Verify the Installation

# Check all pods
kubectl get pods -A
# Verify the API server through the load balancer: https://cluster-endpoint:16443/version
# Verify the dashboard at https://<node-ip>:31443 and Harbor at https://myharbor.com (admin/Harbor12345)
# Pull an image from Harbor using crictl
crictl pull myharbor.com/bigdata/mysql:5.7.38
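
The pull above presumes the image was already pushed into a bigdata project; a hedged push sequence using Docker (the project must first be created in the Harbor UI, and the pushing host must trust the Harbor CA):

```shell
docker login myharbor.com -u admin -p Harbor12345
docker tag mysql:5.7.38 myharbor.com/bigdata/mysql:5.7.38
docker push myharbor.com/bigdata/mysql:5.7.38
```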

Following these steps results in a fully functional, highly available Kubernetes cluster with a load‑balanced API server, a web UI dashboard, NFS‑backed persistent storage, and a private Harbor registry ready for production workloads.

Tags: high availability, Kubernetes, nginx, NFS, Harbor, flannel, keepalived
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, plus architecture adjustments using internet technologies. We welcome idea-driven, sharing-oriented architects to exchange and learn together.