
Deploy a High‑Availability k0s Kubernetes Cluster with k0sctl

This guide explains how to install and configure k0s, a lightweight Kubernetes distribution, using k0sctl for both standard and high‑availability clusters, covering binary deployment, offline image handling, custom CNI integration, HA load‑balancer setup, certificate management, backup, and advanced features such as etcd replacement and user management.


1. k0s Overview

k0s is described as "The Simple, Solid & Certified Kubernetes Distribution". It is a downstream Kubernetes distribution that retains almost all native Kubernetes functionality, omitting only the cloud‑provider integration.

k0s compiles the Kubernetes source code into binaries and runs them directly on the host, so its behavior is virtually identical to upstream Kubernetes.

2. Using k0sctl

k0sctl is a tool provided by the k0s project to simplify and accelerate cluster deployment. It works similarly to kubeadm but offers far greater extensibility. In a multi‑node scenario, k0sctl connects to target hosts via SSH, uploads required files, and starts the necessary Kubernetes services to initialise the cluster.

2.1 Install a cluster

First, install k0sctl:

# Install k0sctl
wget https://github.com/k0sproject/k0sctl/releases/download/v0.9.0/k0sctl-linux-x64
chmod +x k0sctl-linux-x64
mv k0sctl-linux-x64 /usr/local/bin/k0sctl
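The commands above assume an x64 machine. A small sketch for picking the matching release asset on other architectures (the asset names are assumptions based on the k0sctl release page; verify them for the version you install):

```shell
# Map this machine's architecture to a k0sctl release asset name.
# The asset names below are assumptions; check the release page for
# the exact names published with your chosen k0sctl version.
arch=$(uname -m)
case "$arch" in
  x86_64)  asset="k0sctl-linux-x64" ;;
  aarch64) asset="k0sctl-linux-arm64" ;;
  *) echo "unsupported arch: $arch" >&2; exit 1 ;;
esac
echo "$asset"
```

Substitute the printed asset name into the wget URL above.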

Then create a k0sctl.yaml configuration file. A minimal example:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.0.0.11
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
  - ssh:
      address: 10.0.0.12
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
  - ssh:
      address: 10.0.0.13
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
  - ssh:
      address: 10.0.0.14
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: worker
  - ssh:
      address: 10.0.0.15
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: worker
  k0s:
    version: v1.21.2+k0s.1
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          address: 10.0.0.11
          port: 6443
          k0sApiPort: 9443
          sans:
          - 10.0.0.11
          - 10.0.0.12
          - 10.0.0.13
        storage:
          type: etcd
        network:
          kubeProxy:
            disabled: false
            mode: ipvs

Run the apply command (ensure password‑less SSH access to all hosts):

k0sctl apply -c k0sctl.yaml

After a short wait, a cluster with three masters and two workers will be ready. Example node list:

NAME      STATUS   ROLES   AGE   VERSION          INTERNAL-IP   OS-IMAGE            KERNEL-VERSION   CONTAINER-RUNTIME
k1.node   Ready    <none>  10m   v1.21.2+k0s      10.0.0.11     Ubuntu 20.04.2 LTS  5.4.0-77-generic  containerd://1.4.6
k2.node   Ready    <none>  10m   v1.21.2+k0s      10.0.0.12     Ubuntu 20.04.2 LTS  5.4.0-77-generic  containerd://1.4.6
k3.node   Ready    <none>  10m   v1.21.2+k0s      10.0.0.13     Ubuntu 20.04.2 LTS  5.4.0-77-generic  containerd://1.4.6
k4.node   Ready    <none>  10m   v1.21.2+k0s      10.0.0.14     Ubuntu 20.04.2 LTS  5.4.0-77-generic  containerd://1.4.6
k5.node   Ready    <none>  10m   v1.21.2+k0s      10.0.0.15     Ubuntu 20.04.2 LTS  5.4.0-77-generic  containerd://1.4.6
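Once the cluster is up, k0sctl can also emit an admin kubeconfig. A sketch, assuming the same k0sctl.yaml and SSH access used for the install, plus kubectl on the local machine:

```shell
# Generate an admin kubeconfig for the new cluster and point kubectl at it
k0sctl kubeconfig -c k0sctl.yaml > kubeconfig
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes -o wide
```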

2.2 Extension mechanisms

File upload – k0sctl can upload arbitrary files (binaries, offline image bundles, scripts, etc.) to target hosts before installation.

Manifests & Helm – Files placed in /var/lib/k0s/manifests on a master are applied to the cluster automatically by k0s's manifest deployer (similar in effect to kubectl apply), supporting Deployments, DaemonSets, Namespaces and more. Helm charts can also be declared directly in the k0s configuration.

Hooks – Custom scripts can be executed on each host at defined points via the hooks option; the feature is only lightly documented, so consulting the k0sctl source may be necessary.
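As an illustration of the hooks option, a host entry can run commands before and after an apply. A sketch, with the field layout following k0sctl's hooks structure and the commands themselves being placeholders:

```yaml
spec:
  hosts:
  - ssh:
      address: 10.0.0.11
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
    hooks:
      apply:
        before:
        - date >> /var/log/k0s-install.log
        after:
        - echo "apply finished" >> /var/log/k0s-install.log
```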

2.3 Offline image bundles

k0s can automatically import an offline image bundle placed in /var/lib/k0s/images/ into containerd, eliminating manual image loading.

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.0.0.11
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
    files:
    - name: image-bundle
      src: /Users/bleem/tmp/bundle_file
      dstDir: /var/lib/k0s/images/
      perm: 0755

2.4 Switching CNI plugins

k0s bundles Calico and kube‑router by default. To use another CNI such as Flannel, set the provider to custom and upload the Flannel manifest to /var/lib/k0s/manifests. The CNI binaries must also be uploaded manually.

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.0.0.11
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
    files:
    - name: flannel
      src: /Users/bleem/tmp/kube-flannel.yaml
      dstDir: /var/lib/k0s/manifests/flannel
      perm: 0644
    - name: cni-plugins
      src: /Users/bleem/tmp/cni-plugins/*
      dstDir: /opt/cni/bin/
      perm: 0755
  k0s:
    version: v1.21.2+k0s.1
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        network:
          provider: custom

2.5 Uploading the k0s binary

In offline environments the k0s binary can be uploaded instead of being downloaded on the target hosts:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.0.0.11
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
    uploadBinary: true
    k0sBinaryPath: /Users/bleem/tmp/k0s

2.6 Customising component images

Component images (e.g., kube‑proxy) can be overridden by specifying images entries with custom image names and versions.

k0s:
  version: v1.21.2+k0s.1
  config:
    spec:
      images:
        kubeproxy:
          image: k8s.gcr.io/kube-proxy
          version: v1.21.3
        default_pull_policy: IfNotPresent

2.7 Adjusting master component arguments

spec.api.extraArgs – custom arguments for the API server.
spec.scheduler.extraArgs – custom arguments for the scheduler.
spec.controllerManager.extraArgs – custom arguments for the controller‑manager.
spec.workerProfiles – overrides applied to the generated kubelet-config.yaml.
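A hedged sketch combining these knobs (the specific flags and the profile name are illustrative; any flag the upstream components accept can be passed through):

```yaml
k0s:
  config:
    spec:
      api:
        extraArgs:
          service-node-port-range: "30000-40000"
      controllerManager:
        extraArgs:
          node-cidr-mask-size: "24"
      workerProfiles:
      - name: high-pod-density
        values:
          maxPods: 200
```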

3. Building a HA k0s cluster

HA is achieved by placing an external Layer‑4 load balancer in front of the masters. The load balancer must forward the following ports to all master nodes: 6443 (API), 9443 (controller join API), 8132 (Konnectivity agent), and 8133 (Konnectivity server).

3.1 Example Nginx stream configuration

error_log syslog:server=unix:/log notice;
worker_processes auto;
stream {
    upstream kube_apiserver {
        least_conn;
        server 10.0.0.11:6443;
        server 10.0.0.12:6443;
        server 10.0.0.13:6443;
    }
    upstream konnectivity_agent {
        least_conn;
        server 10.0.0.11:8132;
        server 10.0.0.12:8132;
        server 10.0.0.13:8132;
    }
    upstream konnectivity_server {
        least_conn;
        server 10.0.0.11:8133;
        server 10.0.0.12:8133;
        server 10.0.0.13:8133;
    }
    upstream controller_join_api {
        least_conn;
        server 10.0.0.11:9443;
        server 10.0.0.12:9443;
        server 10.0.0.13:9443;
    }
    server {
        listen 0.0.0.0:6443;
        proxy_pass kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
    server {
        listen 0.0.0.0:8132;
        proxy_pass konnectivity_agent;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
    server {
        listen 0.0.0.0:8133;
        proxy_pass konnectivity_server;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
    server {
        listen 0.0.0.0:9443;
        proxy_pass controller_join_api;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}

3.2 HA cluster configuration example

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.0.0.11
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
    uploadBinary: true
    k0sBinaryPath: /Users/bleem/tmp/k0s
    files:
    - name: flannel
      src: /Users/bleem/tmp/kube-flannel.yaml
      dstDir: /var/lib/k0s/manifests/flannel
      perm: 0644
    - name: image-bundle
      src: /Users/bleem/tmp/bundle_file
      dstDir: /var/lib/k0s/images/
      perm: 0755
    - name: cni-plugins
      src: /Users/bleem/tmp/cni-plugins/*
      dstDir: /opt/cni/bin/
      perm: 0755
  - ssh:
      address: 10.0.0.12
      user: root
      port: 22
      keyPath: /Users/bleem/.ssh/id_rsa
    role: controller+worker
    uploadBinary: true
    k0sBinaryPath: /Users/bleem/tmp/k0s
    files:
    - name: image-bundle
      src: /Users/bleem/tmp/bundle_file
      dstDir: /var/lib/k0s/images/
      perm: 0755
    - name: cni-plugins
      src: /Users/bleem/tmp/cni-plugins/*
      dstDir: /opt/cni/bin/
      perm: 0755
  # (additional worker nodes omitted for brevity)
  k0s:
    version: v1.21.2+k0s.1
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          externalAddress: 10.0.0.20
          sans:
          - 10.0.0.11
          - 10.0.0.12
          - 10.0.0.13
          - 10.0.0.20
        storage:
          type: etcd
        network:
          podCIDR: 10.244.0.0/16
          serviceCIDR: 10.96.0.0/12
          provider: custom
          kubeProxy:
            disabled: false
            mode: ipvs
        telemetry:
          enabled: false
        images:
          default_pull_policy: IfNotPresent

Execute k0sctl apply -c k0sctl.yaml and wait a few minutes; the cluster will be ready.

3.3 Certificate renewal

k0s uses a 10‑year CA by default. When the one‑year node certificates approach expiry, simply restart the k0scontroller.service on each master to regenerate them.
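A sketch of that renewal step across the three masters from this guide, assuming root SSH access as in the k0sctl.yaml above:

```shell
# Restarting k0scontroller regenerates the one-year leaf certificates
for host in 10.0.0.11 10.0.0.12 10.0.0.13; do
  ssh root@"$host" systemctl restart k0scontroller.service
done
```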

4. Backup and restore

Run k0sctl backup to create a k0s_backup_TIMESTAMP.tar.gz file in the current directory. To restore, use k0sctl apply --restore-from k0s_backup_TIMESTAMP.tar.gz. Restoration performs a fresh installation, so use with caution; Velero is recommended for production‑grade backups.

5. Advanced features

5.1 etcd replacement

For small clusters you can replace etcd with the lightweight kine backend (SQLite, MySQL, etc.) by adjusting the storage type:

apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  storage:
    type: kine
    kine:
      dataSource: "sqlite:///var/lib/k0s/db/state.db?more=rwc&_journal=WAL&cache=shared"
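For a shared external database instead of a local SQLite file, the dataSource switches to kine's MySQL DSN format. A sketch, with the host, credentials, and database name as placeholders:

```yaml
spec:
  storage:
    type: kine
    kine:
      dataSource: "mysql://k0s:CHANGEME@tcp(10.0.0.30:3306)/k0s"
```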

5.2 Cluster user management

Create a new user with admin privileges:

k0s kubeconfig create --groups "system:masters" testUser > k0s.config
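The resulting file can then be handed to the user and used directly (assumes kubectl is installed on their machine):

```shell
# Act as testUser; system:masters grants cluster-admin-equivalent access
KUBECONFIG=$PWD/k0s.config kubectl get nodes
```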

5.3 Containerd configuration

Custom Containerd settings can be supplied by uploading a containerd.toml file to /etc/k0s/containerd.toml before installation.
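A minimal sketch of such a file, assuming containerd's v2 config format; the sandbox-image override is just an example setting:

```toml
# /etc/k0s/containerd.toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # Example override: pin a specific pause image
  sandbox_image = "k8s.gcr.io/pause:3.4.1"
```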

6. Summary

k0s provides a convenient binary‑based method for deploying Kubernetes clusters. Its close alignment with upstream Kubernetes, combined with the extensible k0sctl tool, makes it suitable for both simple and production‑grade HA setups, though some components (e.g., Konnectivity) still lack fine‑grained toggles.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Kubernetes, CNI, HA, offline deployment, k0s, k0sctl
Written by Open Source Linux, focused on sharing Linux/Unix content covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.