
Step‑by‑Step Rancher Deployment for Multi‑Cluster Kubernetes Management

This guide explains the background of multi‑IDC Kubernetes clusters, why a unified platform like Rancher is needed, and provides detailed step‑by‑step instructions for single‑node, high‑availability RKE, lightweight K3s deployments, Helm installation, cert‑manager setup, ingress configuration, and best‑practice recommendations.

360 Zhihui Cloud Developer

Background

With rapid business growth, the number of Kubernetes clusters in the online environment has increased dramatically, resulting in a mix of IDC locations, runtimes, versions, and management styles:

Different business lines deploy clusters in different IDC locations (e.g., Beijing, Shanghai).

Some clusters are Docker‑based, while newer clusters use containerd.

Multiple K8s versions (e.g., 1.24, 1.26, 1.28) coexist with no unified upgrade plan.

Management styles vary: some clusters are ops‑managed, others are self‑managed, causing high operational complexity, scattered permissions, and security risks.

In this context, a platform is urgently needed to unify management of these diverse Kubernetes clusters, achieving lifecycle management, centralized permission control, and unified monitoring and auditing. The company chose Rancher as the cluster management platform because it supports multi‑cluster access, offers a user‑friendly UI, provides flexible permission models, and integrates natively with ecosystem tools such as Prometheus, Grafana, and logging, making it a mature solution for multi‑cluster environments.


Rancher Deployment Methods

1. Single‑node deployment (Docker test)

Advantages: simple and quick to start.

Disadvantages: single point of failure; not suitable for production.

1.1 Precautions

A domestic registry mirror can be used:

registry.cn-hangzhou.aliyuncs.com

1.2 Steps

[root@rancher001v ~]# mkdir -p /data/rancher/k3s/agent/images

# Manually obtain /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar from the image and copy it to the directory
[root@rancher001v ~]# docker run --rm --entrypoint "" -v $(pwd):/output registry.cn-hangzhou.aliyuncs.com/rancher/rancher:v2.8.5 cp /var/lib/rancher/k3s/agent/images/k3s-airgap-images.tar /output/k3s-airgap-images.tar
[root@rancher001v ~]# ls
k3s-airgap-images.tar

# View file contents without extracting
[root@rancher001v ~]# tar -O -xf ./k3s-airgap-images.tar manifest.json | jq
[root@rancher001v ~]# tar -O -xf ./k3s-airgap-images.tar repositories | jq
[root@rancher001v ~]# cp k3s-airgap-images.tar /data/rancher/k3s/agent/images/

# Start container using Alibaba Cloud registry
[root@rancher001v ~]# docker run -d --restart=unless-stopped --privileged \
    -p 80:80 -p 443:443 \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.cn-hangzhou.aliyuncs.com \
    -e CATTLE_BOOTSTRAP_PASSWORD=rancher \
    -v /data/rancher:/var/lib/rancher \
    --name rancher \
    registry.cn-hangzhou.aliyuncs.com/rancher/rancher:v2.8.5

# Enter container
[root@rancher001v ~]# docker exec -it rancher sh

# Check K3s status
sh-4.4# kubectl get nodes
sh-4.4# kubectl get pods -A
sh-4.4# kubectl get pod -A -o json | grep -w "image" | sort | uniq

# List ctr images
sh-4.4# ctr images ls
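If jq is not installed on the host, the same read-without-extracting trick works with Python's built-in json.tool. A minimal offline sketch (the tarball below is a stand-in built on the spot, since the real k3s-airgap-images.tar has to be copied out of the Rancher image as shown above):

```shell
# Build a stand-in tarball with a manifest.json (demo data only)
printf '[{"RepoTags":["rancher/mirrored-pause:3.6"]}]' > manifest.json
tar -cf demo-airgap.tar manifest.json

# Read a member without extracting the archive, pretty-print without jq
tar -O -xf demo-airgap.tar manifest.json | python3 -m json.tool
```

The same pipeline applied to the real tarball lists the RepoTags of every image K3s will preload.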

2. High‑availability local cluster deployment

2.1 RKE deployment (production)

RKE (Rancher Kubernetes Engine) is Rancher’s official K8s installer.

Precautions:

RKE and Rancher versions have interdependencies; plan the version matrix in advance (RKE binary version, Rancher version, K8s version).

Prepare a private registry address and configure it in cluster.yml.

# Set system_images and private_registries; system_images will be used first.
# If using private_registries, remove the system_images entries.
private_registries:
  - is_default: true
    url: mirror.k8s.qihoo.net/docker

Steps:

Prepare three nodes as Rancher control plane and etcd nodes, plan hostnames/IPs.

Install the RKE binary.

Prepare cluster.yml and define node roles.

Run rke up to deploy the HA cluster.

Install Rancher via Helm.
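To make step 3 concrete, here is a minimal cluster.yml sketch for the three-node HA layout; the node addresses, SSH user, and key path are placeholder assumptions to adapt, and the private registry matches the precaution above:

```shell
# Generate a minimal 3-node HA cluster.yml (addresses, user, and key path are placeholders)
cat > cluster.yml <<'EOF'
nodes:
  - address: 10.0.0.11
    user: rancher
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 10.0.0.12
    user: rancher
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 10.0.0.13
    user: rancher
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa

private_registries:
  - is_default: true
    url: mirror.k8s.qihoo.net/docker
EOF

# rke up reads ./cluster.yml by default
grep -c 'controlplane' cluster.yml
```

With all three roles on each of the three nodes, etcd keeps quorum through the loss of any single node.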

2.2 K3s lightweight local cluster

# Domestic installation steps
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | \
  INSTALL_K3S_MIRROR=cn \
  K3S_TOKEN=12345 sh -s - \
  --system-default-registry=registry.cn-hangzhou.aliyuncs.com
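Beyond --system-default-registry, K3s can also redirect image pulls per upstream registry through /etc/rancher/k3s/registries.yaml. A sketch of that file, written to the current directory here for illustration (the mirror endpoint is an assumption to adapt):

```shell
# Sketch of /etc/rancher/k3s/registries.yaml (written locally for illustration)
cat > registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.cn-hangzhou.aliyuncs.com"
EOF
grep -q 'registry.cn-hangzhou' registries.yaml && echo "registries.yaml ready"
```

Copy the file to /etc/rancher/k3s/registries.yaml on each node and restart the k3s service for it to take effect.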

2.3 Rancher installation

1. Install Helm

# Helm and K8s version compatibility
wget https://get.helm.sh/helm-v3.15.3-linux-amd64.tar.gz
tar zxvf helm-v3.15.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
helm version
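get.helm.sh publishes a matching .sha256sum file next to each tarball, so the download can be verified before the binary lands in /usr/local/bin. The pattern is sketched offline below on a stand-in file; for the real check, skip the first two lines and run sha256sum -c against the downloaded pair:

```shell
# Stand-in for the downloaded tarball (offline demo only)
printf 'demo tarball contents' > helm-v3.15.3-linux-amd64.tar.gz
sha256sum helm-v3.15.3-linux-amd64.tar.gz > helm-v3.15.3-linux-amd64.tar.gz.sha256sum

# Verify the checksum; prints "helm-v3.15.3-linux-amd64.tar.gz: OK" on success
sha256sum -c helm-v3.15.3-linux-amd64.tar.gz.sha256sum
```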

2. Deploy cert‑manager and configure Rancher

# Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/<VERSION>/cert-manager.crds.yaml

# Install via Helm
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace

3. Install Rancher via Helm

# Add Rancher repo
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system

# Install Rancher
helm install rancher rancher-latest/rancher --version 2.10.0 \
  --namespace cattle-system \
  --set hostname=rancher.test.com \
  --set replicas=1 \
  --set bootstrapPassword=rancher \
  --dry-run=client > rancher-v2.10.0.yaml

3. Ingress configuration

# Get ingress
kubectl get ing

# Example output
NAME      CLASS   HOSTS              ADDRESS                        PORTS    AGE
rancher   nginx   rancher.test.com   11.123.251.199,11.123.255.17   80,443   49m

# Test ingress
curl -k -H "Host: rancher.test.com" https://11.123.251.199:30443
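If the smoke test is to be scripted, the first ADDRESS can be extracted from the kubectl get ing output with awk; demonstrated here on the captured example output rather than a live cluster:

```shell
# Captured `kubectl get ing` output (stand-in for a live cluster)
cat > ing.txt <<'EOF'
NAME      CLASS   HOSTS              ADDRESS                        PORTS    AGE
rancher   nginx   rancher.test.com   11.123.251.199,11.123.255.17   80,443   49m
EOF

# Row 2, column 4 holds the comma-separated addresses; take the first
awk 'NR==2 {split($4, a, ","); print a[1]}' ing.txt
```

The printed address can then be substituted into the curl command above.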

Summary and Recommendations

Rancher is an ideal central platform for managing multiple clusters; unify access and gradually replace fragmented management methods.

For production, use RKE + Helm high‑availability deployment to ensure system stability.

Back up users and tokens before upgrades to avoid compatibility‑induced access failures.

When using containerd runtimes, pay special attention to namespace and image name matching for image pull and tag operations.

Establish a unified kubeconfig generation strategy; avoid manual ServiceAccount binding for cluster access.

Tags: Kubernetes, cluster management, Helm, rancher, HA deployment, RKE
Written by

360 Zhihui Cloud Developer

360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.
