
Managing Multiple Kubernetes Clusters on One Node: A Step‑by‑Step Guide

This guide explains how to run and switch between several Kubernetes clusters on a single server by configuring separate kubeconfig entries, adding clusters, users, and contexts, and using kubectl commands to manage each cluster efficiently.

Full-Stack DevOps & Kubernetes

Why run multiple Kubernetes clusters on a single node?

In many production environments, different teams, environments (dev / test / prod), or workloads require isolated Kubernetes clusters. Maintaining a separate physical machine for each cluster wastes resources and adds operational overhead. By configuring several clusters on the same host you can switch contexts quickly, keep the clusters isolated at the API level, and stay aligned with DevOps practices that favour automation and resource efficiency.

1) Simulated clusters

Two independent clusters are created on the same physical host. Their control‑plane and worker nodes are reachable on distinct IP addresses.

# Cluster 1 – k8smaster
kubectl get nodes
NAME        STATUS   ROLES        VERSION
k8smaster   Ready    controlplane v1.23.1
k8slave     Ready    worker       v1.23.1

# IPs
k8smaster: 192.168.40.180
k8slave:   192.168.40.181

# Cluster 2 – k8smaster2
kubectl get nodes
NAME         STATUS   ROLES        VERSION
k8smaster2   Ready    controlplane v1.23.1
k8slave2     Ready    worker       v1.23.1

# IPs
k8smaster2: 192.168.40.185
k8slave2:   192.168.40.186

2) Inspecting the kubeconfig files

By default kubectl reads its kubeconfig from $HOME/.kube/config (here /root/.kube/config, since the commands are run as root). You can view the current configuration with kubectl config view. Below are trimmed excerpts that illustrate the structure for each cluster.

# kubeconfig for cluster 1 (k8smaster)
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.40.180:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

# kubeconfig for cluster 2 (k8smaster2)
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.40.185:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
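As an alternative to adding entries by hand (shown in the next section), kubectl can merge several kubeconfig files through the KUBECONFIG environment variable. A sketch, assuming the second cluster's kubeconfig has been copied to the hypothetical path /root/.kube/config2:

```shell
# Point kubectl at both files; entries are merged left to right
export KUBECONFIG=/root/.kube/config:/root/.kube/config2

# List every context from the merged view
kubectl config get-contexts

# Optionally write a single flattened file (certificate data embedded inline)
kubectl config view --flatten > /root/.kube/merged-config
```

Note that the two files above both name their cluster "kubernetes" and their context "kubernetes-admin@kubernetes"; when names collide, the entry from the first file in the list wins, so the second file's entries would need renaming before merging.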

3) Adding the second cluster to the first node’s configuration

Run the following commands on the k8smaster host. They create a new cluster entry, a credential (user) entry, a context that ties the two together, and finally switch the active context.

Add the cluster definition

kubectl config set-cluster k8smaster2 \
  --server=https://192.168.40.185:6443 \
  --insecure-skip-tls-verify=true
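The --insecure-skip-tls-verify=true flag disables server certificate checking and should be limited to lab setups. A safer variant, assuming the second cluster's CA certificate has been copied to the hypothetical path /root/k8smaster2-ca.crt:

```shell
# Register the cluster with proper TLS verification;
# --embed-certs=true inlines the CA data into the kubeconfig
kubectl config set-cluster k8smaster2 \
  --server=https://192.168.40.185:6443 \
  --certificate-authority=/root/k8smaster2-ca.crt \
  --embed-certs=true
```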

Create a user (credential) entry

If you already have a token for the second cluster, you can use it directly. Note that the bootstrap tokens produced by kubeadm token create --print-join-command are intended for joining nodes to a cluster, not for day-to-day API access; for managing the cluster with kubectl, a ServiceAccount token with appropriate RBAC is the usual choice.

kubectl config set-credentials k8smaster2-user \
  --token=clknqa.km25oi82urcuja9u
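One way to obtain such a token is to create a dedicated ServiceAccount on the second cluster and grant it a role. A sketch with hypothetical names, using cluster-admin for illustration (narrow the RBAC in practice):

```shell
# Run on k8smaster2: create a ServiceAccount and bind it to a ClusterRole
kubectl -n kube-system create serviceaccount k8smaster2-admin
kubectl create clusterrolebinding k8smaster2-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:k8smaster2-admin

# On Kubernetes v1.23 a token Secret is auto-created for the ServiceAccount;
# extract and decode it
SECRET=$(kubectl -n kube-system get serviceaccount k8smaster2-admin \
  -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d
```

On Kubernetes v1.24 and later, auto-created token Secrets were removed; kubectl create token k8smaster2-admin is the replacement.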

Define a context that binds the cluster and user

kubectl config set-context k8smaster2-context \
  --cluster=k8smaster2 \
  --user=k8smaster2-user

Switch to the new context

kubectl config use-context k8smaster2-context

After executing these steps the k8smaster node can manage both the k8smaster and k8smaster2 clusters. The same pattern (set-cluster, set-credentials, set-context, use-context) can be repeated to add any number of additional clusters, each isolated by its own context.
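To confirm the setup, or to query a cluster without changing the active context, something like the following works (both context names come from the kubeconfig excerpts above):

```shell
# Show all contexts; the asterisk marks the active one
kubectl config get-contexts

# Target a specific cluster for a single command
kubectl --context=kubernetes-admin@kubernetes get nodes
kubectl --context=k8smaster2-context get nodes

# Switch back to the first cluster
kubectl config use-context kubernetes-admin@kubernetes
```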

Tags: Kubernetes, Multi-Cluster, context, kubeconfig
Written by

Full-Stack DevOps & Kubernetes

Focused on sharing DevOps, Kubernetes, Linux, Docker, Istio, microservices, Spring Cloud, Python, Go, databases, Nginx, Tomcat, cloud computing, and related technologies.
