
How to Build a Small‑Scale KubeSphere Kubernetes Cluster: A Step‑by‑Step Guide

This guide walks you through planning, deploying, and configuring a production‑grade KubeSphere‑based Kubernetes cluster for small environments, covering node layout, storage choices, middleware setup, OS and Docker preparation, HAProxy/Keepalived high‑availability, and KubeKey installation with verification steps.


Prerequisites

This series targets small-scale Kubernetes production environments (≤50 nodes); larger setups need additional validation.

All nodes run as cloud VMs.

K8s security hardening is not covered; high‑security environments should add it later.

The documentation evolves with real‑world issues.

KubeSphere is used as the base platform.

Related Ansible scripts are available at https://gitee.com/zdevops/cloudnative.

KubeSphere Overview

Full‑stack K8s Container Cloud PaaS

KubeSphere builds on Kubernetes to provide a multi‑tenant, application‑centric platform with end‑to‑end DevOps automation, a wizard‑style UI, and plug‑in modular components.

Fully open source

Easy installation

Rich features

Modular & pluggable

Selection Reasons (Operations Perspective)

Simple installation and usage.

One‑stop enterprise DevOps and visual operations.

Centralized logging, monitoring, events, audit, alerts, and notifications with multi‑tenant isolation.

Simplifies CI, testing, review, release, upgrade, and elastic scaling.

Provides gray‑release, traffic management, and service mesh for cloud‑native apps.

Offers both CLI and graphical interfaces for diverse operator habits.

Easy decoupling to avoid vendor lock‑in.

Architecture Diagram

(The original article includes an architecture diagram of the cluster here.)

Node Planning

Software Versions

OS: CentOS 7.9

KubeSphere: v3.1.1 (v3.2.1 was the latest release at the time of writing)

KubeKey: v1.1.1

K8s: v1.20.4

Docker: v19.03.15

Cluster Layout

2 HAProxy nodes with Keepalived for high availability.

3 Master nodes (etcd and control plane components).

6 Worker nodes (application workloads, count adjustable).

Storage Cluster

3 nodes running GlusterFS, each with a 1 TB data disk.

Middleware Cluster

Nginx proxy nodes (made highly available with Keepalived; a Kubernetes Ingress is not used).

MySQL primary‑secondary setup.

GitLab for GitOps.

Harbor image registry.

Elasticsearch (3 nodes) for logs.

Prometheus for monitoring.

Redis (3‑node sentinel) and RocketMQ (3‑node) clusters, currently on K8s with placeholders for future bare‑metal deployment.

Network Planning

K8s cluster: 192.168.9.0/24

Storage cluster: 192.168.10.0/24

Middleware cluster: 192.168.11.0/24
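To make the layout concrete, a hedged /etc/hosts-style sketch of the K8s network follows. The hostnames and the worker address are illustrative assumptions; the VIP, LB, and master addresses match the HAProxy and Keepalived examples later in this guide.

192.168.9.1   k8s-vip        # Keepalived VIP in front of kube-apiserver
192.168.9.2   k8s-lb-1       # HAProxy + Keepalived
192.168.9.3   k8s-lb-2       # HAProxy + Keepalived
192.168.9.4   k8s-master-1
192.168.9.5   k8s-master-2
192.168.9.6   k8s-master-3
192.168.9.7   k8s-worker-1   # worker addresses continue upward (assumed)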

Storage Selection

GlusterFS was chosen for its relative simplicity and built-in high availability.

Other candidates considered include Ceph, NFS, MinIO, and Longhorn, each with its own trade-offs.

K8s Server Basic Configuration

OS Base Setup (All Master & Worker Nodes)

Disable firewall and SELinux:

# systemctl stop firewalld && systemctl disable firewalld
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Set the hostname:

# hostnamectl set-hostname <your-hostname>

Mount the data disk (example for /dev/vdb1):

# lsblk
# fdisk /dev/vdb
# mkfs.ext4 /dev/vdb1
# mkdir /data
# mount /dev/vdb1 /data
# echo '/dev/vdb1 /data ext4 defaults 0 0' >> /etc/fstab

Update OS and reboot:

# yum update
# reboot

Install required packages:

# yum install socat conntrack ebtables ipset
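Beyond these packages, Kubernetes nodes are usually also prepared by disabling swap and letting bridged traffic pass through iptables. The following is an optional sketch of that preparation, not part of the original steps; KubeKey checks several of these prerequisites itself.

# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab
# modprobe br_netfilter
# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system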

Basic Security

Baseline hardening should follow your organization’s scanning reports; scripts can be added later.

Docker Installation

Configure Docker yum repo (example using Tsinghua mirror):

# vi /etc/yum.repos.d/docker.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/stable
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg
enabled=1

Create /etc/docker/daemon.json (create the /etc/docker directory first if it does not exist yet):

{
  "data-root": "/data/docker",
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "log-opts": {"max-size": "5m", "max-file": "3"},
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Install Docker 19.03.15:

# yum install docker-ce-19.03.15-3.el7 docker-ce-cli-19.03.15-3.el7 -y

Enable and start Docker:

# systemctl restart docker.service && systemctl enable docker.service

Verify installation:

# docker version
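Because daemon.json switches the cgroup driver to systemd, it is worth confirming that Docker picked the setting up (an extra check, not in the original steps); the command should report systemd:

# docker info | grep -i "cgroup driver"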

Load Balancer Installation

Three Options

Cloud provider’s ELB.

Self‑built HAProxy/Nginx (chosen).

KubeSphere’s built‑in HAProxy (available from KubeKey v1.2.1).

HAProxy Setup (All LB Nodes)

Install the packages:

# yum install haproxy keepalived

Edit /etc/haproxy/haproxy.cfg (example snippet):

global
    log /dev/log local0 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    balance roundrobin
    server kube-apiserver-1 192.168.9.4:6443 check
    server kube-apiserver-2 192.168.9.5:6443 check
    server kube-apiserver-3 192.168.9.6:6443 check
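Before restarting the service, the configuration can optionally be syntax-checked:

# haproxy -c -f /etc/haproxy/haproxy.cfg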

Start HAProxy and enable on boot:

# systemctl restart haproxy && systemctl enable haproxy

Keepalived Setup (All LB Nodes)

Edit /etc/keepalived/keepalived.conf (Master example):

global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy-vip {
    state MASTER
    priority 100
    interface eth0
    virtual_router_id 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 192.168.9.2
    unicast_peer {
        192.168.9.3
    }
    virtual_ipaddress {
        192.168.9.1/24
    }
    track_script {
        chk_haproxy
    }
}
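On the second LB node the configuration mirrors the one above; only the role, priority, and unicast addresses change. A sketch of the differing lines (the priority of 99 is an assumption, chosen to stay within the track_script weight of 2 so that a failed HAProxy health check on the MASTER actually triggers failover):

vrrp_instance haproxy-vip {
    state BACKUP                 # MASTER on the first node
    priority 99                  # below the MASTER's 100, but within the weight of 2
    interface eth0
    virtual_router_id 60
    advert_int 1
    unicast_src_ip 192.168.9.3   # this node's address
    unicast_peer {
        192.168.9.2              # the MASTER node
    }
    # authentication, virtual_ipaddress, and track_script are identical
    # to the MASTER example above
}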

Start Keepalived and enable on boot:

# systemctl restart keepalived && systemctl enable keepalived

Verification

Check the VIP on the LB nodes: ip a s should show 192.168.9.1 on the node currently holding the MASTER role.

Ping the VIP from other nodes to confirm connectivity.
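A simple failover test, assuming the addresses and interface names above: on the LB node currently holding the VIP, stop HAProxy and watch the VIP move to the peer.

# systemctl stop haproxy

On the peer node, 192.168.9.1 should appear within a few seconds:

# ip a s eth0

Restore HAProxy on the first node afterwards:

# systemctl start haproxy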

KubeSphere Installation with KubeKey

Download KubeKey (set KKZONE=cn for China mirrors):

# export KKZONE=cn
# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
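Make the downloaded binary executable and confirm its version:

# chmod +x kk
# ./kk version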

Create a sample config file:

# ./kk create config --with-kubesphere v3.1.1 --with-kubernetes v1.20.4

Edit config-sample.yaml to match the planned hosts, roleGroups, controlPlaneEndpoint, network, and other settings (example snippets omitted for brevity).
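For orientation, a trimmed and purely illustrative config-sample.yaml for this layout might look roughly like the following. Host names and passwords are placeholders, the master addresses and VIP come from the HAProxy/Keepalived examples above, and the file generated by kk contains additional fields:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master-1, address: 192.168.9.4, internalAddress: 192.168.9.4, user: root, password: "<password>"}
  - {name: k8s-master-2, address: 192.168.9.5, internalAddress: 192.168.9.5, user: root, password: "<password>"}
  - {name: k8s-master-3, address: 192.168.9.6, internalAddress: 192.168.9.6, user: root, password: "<password>"}
  # worker entries follow the same pattern
  roleGroups:
    etcd:
    - k8s-master-1
    - k8s-master-2
    - k8s-master-3
    master:
    - k8s-master-1
    - k8s-master-2
    - k8s-master-3
    worker:
    - k8s-worker-1
    # remaining workers listed here
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.9.1"   # the Keepalived VIP
    port: 6443
  kubernetes:
    version: v1.20.4
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18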

Deploy the cluster:

# ./kk create cluster -f config-sample.yaml

Verify the installation by following the installer logs:

# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
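Once the installer reports success, a couple of quick sanity checks:

# kubectl get nodes -o wide
# kubectl get pods --all-namespaces

The KubeSphere web console then listens on NodePort 30880 (http://<any-node-IP>:30880) with the default admin account; change its password immediately.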

References

KubeSphere Introduction: https://kubesphere.io/zh/

Internal HAProxy with KubeKey: https://kubesphere.io/zh/docs/installing-on-linux/high-availability-configurations/internal-ha-configuration/

Multi‑node Installation Guide: https://kubesphere.io/zh/docs/installing-on-linux/introduction/multioverview/

Keepalived + HAProxy HA Cluster: https://kubesphere.io/zh/docs/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy/

Next Steps

The next article will cover persistent storage with GlusterFS in a KubeSphere production environment.

