
How to Build a Multi‑Cloud k3s Cluster with WireGuard and Kilo CNI

This guide walks you through using WireGuard for both networking and encryption in cloud‑native Kubernetes, showing how to configure Kilo CNI to create a multi‑cloud k3s mesh across AWS, Azure, GCP and Alibaba Cloud, and how to connect local clients to the cluster.

Programmer DD

This article provides a comprehensive guide to using WireGuard in cloud‑native environments, focusing on integrating WireGuard with Kubernetes via the Kilo CNI to build a multi‑cloud k3s cluster.

If you are new to WireGuard, read the following articles in order:

WireGuard Tutorial: How WireGuard Works

WireGuard Quick Installation Guide

WireGuard Configuration with wg‑gen‑web

WireGuard Full Mesh Configuration Guide

Additional reference:

WireGuard Deployment and Configuration Details

WireGuard has two main uses in cloud‑native scenarios: networking (CNI) and encryption. The CNI projects that can leverage WireGuard are:

Flannel (networking)

Wormhole (networking)

Kilo (networking)

Calico (encryption only)

To create a k3s cluster spanning AWS, Azure, GCP and Alibaba Cloud, you need to connect the nodes with WireGuard. The process consists of two steps: first, establish the container network between k3s nodes; second, bridge the local network to the cloud nodes.

1. Kilo Network Topology

Kilo supports three topologies:

Logical Groups Interconnect Mode

By default, Kilo creates a mesh between logical regions (e.g., different cloud providers) using the node label topology.kubernetes.io/region. You can override the label with the --topology-label=&lt;label&gt; flag or annotate nodes with kilo.squat.ai/location.

Example: annotate all GCP nodes with the location "gcp":

for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do
  kubectl annotate node "$node" kilo.squat.ai/location="gcp"
done
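To sanity-check the filter before annotating anything, you can run the same grep/awk pipeline against captured output; the node names below are made up for illustration:

```shell
# Simulated `kubectl get nodes` output; the node names are hypothetical.
nodes_output='NAME         STATUS   ROLES    AGE   VERSION
aws-node-1   Ready    master   10d   v1.21.1+k3s1
gcp-node-1   Ready    <none>   10d   v1.21.1+k3s1
gcp-node-2   Ready    <none>   10d   v1.21.1+k3s1'

# Same filter as the loop above: keep rows mentioning "gcp", print column 1.
selected=$(printf '%s\n' "$nodes_output" | grep -i gcp | awk '{print $1}')
printf '%s\n' "$selected"
```

The header row never matches "gcp", so only the two GCP nodes survive the filter.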

Each logical region elects a leader; leaders establish WireGuard tunnels, while nodes within a region connect via a bridge.

Full Mesh Mode

Every node establishes a WireGuard tunnel with every other node. Enable it with the Kilo flag --mesh-granularity=full.

Hybrid Mode

Combine logical groups and full mesh: group cloud nodes (e.g., GCP) together and connect bare‑metal nodes in full mesh.

Generate a topology diagram with kgctl graph | circo -Tsvg > cluster.svg.

2. Deploying Kilo

On Chinese public-cloud hosts, nodes within the same provider cannot reach each other over a shared layer-2 network, so the in-region bridging that logical-group mode relies on does not work and you must use full mesh. Clone the Kilo repository and edit the DaemonSet manifest:

git clone https://github.com/squat/kilo
cd kilo/manifests
# edit kilo-k3s.yaml and add:
#   --encapsulate=never
#   --mesh-granularity=full

Key flags: --encapsulate=never disables IPIP encapsulation between nodes in the same logical location; --mesh-granularity=full enables full mesh mode.
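For orientation, here is roughly where those flags land in the kilo container spec of kilo-k3s.yaml (other fields elided; exact defaults vary between Kilo versions, so treat this as a sketch rather than the literal manifest):

```yaml
containers:
- name: kilo
  image: squat/kilo
  args:
  - --kubeconfig=/etc/kubernetes/kubeconfig
  - --hostname=$(NODE_NAME)
  # added for this multi-cloud setup:
  - --encapsulate=never
  - --mesh-granularity=full
```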

Apply the manifest with kubectl apply -f kilo-k3s.yaml. After deployment, each node has two new interfaces:

kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 ...
  inet 10.4.0.1/16 scope global kilo0
kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 ...
  inet 10.42.0.1/24 scope global kube-bridge

kilo0 is the WireGuard virtual interface; kube-bridge connects the local container veth pairs.

3. Connecting Local Clients to the Cloud Cluster

Assume four public‑cloud nodes (AWS, Azure, GCP, Alibaba) with Service CIDR 10.43.0.0/16 and Pod CIDR 10.42.0.0/16. Each node receives a /24 Pod subnet (10.42.0.0/24, 10.42.1.0/24, …).
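The per-node allocation follows a simple pattern, which a small hypothetical helper makes explicit (it assumes nodes receive consecutive /24s out of 10.42.0.0/16 in the order they join, matching k3s's default allocator):

```shell
# Hypothetical helper: the n-th node to join gets the Pod subnet 10.42.<n>.0/24.
pod_subnet() {
  printf '10.42.%s.0/24\n' "$1"
}

pod_subnet 0   # first node to join, e.g. AWS
pod_subnet 3   # fourth node to join, e.g. Alibaba
```

These per-node subnets are exactly what appear later in the AllowedIPs of each peer entry.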

Use a separate WireGuard interface wg0 on each node (full mesh) and manage the configurations with wg-gen-web. Add the local client as a peer on each cloud node via wg-gen-web; on the local client side, the corresponding AWS peer entry allows the AWS node's Pod subnet and the full Service CIDR:

[Peer]
PublicKey = ...
AllowedIPs = 10.42.0.0/24, 10.43.0.0/16
Endpoint = aws.example.com:51820

Copy the updated wg0.conf from each cloud node to the local machine, remove the PresharedKey entries, and add the appropriate Endpoint values.

[Interface]
Address = 10.0.0.5/32
PrivateKey = ...

[Peer]
PublicKey = JgvmQFmhUtUoS3xFMFwEgP3L1Wnd8hJc3laJ90Gwzko=
AllowedIPs = 10.0.0.1/32, 192.168.10.0/24, 10.42.0.0/24, 10.43.0.0/16
Endpoint = aws.example.com:51820

# Aliyun peer
[Peer]
PublicKey = kVq2ATMTckCKEJFF4TM3QYibxzlh+b9CV4GZ4meQYAo=
AllowedIPs = 10.0.0.4/32, 192.168.40.0/24, 10.42.3.0/24
Endpoint = aliyun.example.com:51820

# GCP peer
[Peer]
PublicKey = qn0Xfyzs6bLKgKcfXwcSt91DUxSbtATDIfe4xwsnsGg=
AllowedIPs = 10.0.0.3/32, 192.168.30.0/24, 10.42.2.0/24
Endpoint = gcp.example.com:51820

# Azure peer
[Peer]
PublicKey = OzdH42suuOpVY5wxPrxM+rEAyEPFg2eL0ZI29N7eSTY=
AllowedIPs = 10.0.0.2/32, 192.168.20.0/24, 10.42.1.0/24
Endpoint = azure.example.com:51820

Start WireGuard on the local machine (for example, wg-quick up wg0) and you can reach Pods and Services across all four clouds.

For full service name resolution from any device, configure CoreDNS to forward .svc.cluster.local queries through the WireGuard mesh.
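As a sketch, a local CoreDNS (or any resolver that supports conditional forwarding) could send cluster queries to the cluster DNS Service IP; 10.43.0.10 below is the k3s default, which sits inside the 10.43.0.0/16 Service CIDR already covered by the AWS peer's AllowedIPs:

```
# Local Corefile fragment (sketch; 10.43.0.10 is k3s's default cluster DNS IP)
cluster.local:53 {
    forward . 10.43.0.10
}
```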

References:

Flannel – https://github.com/flannel-io/flannel

Wormhole – https://github.com/gravitational/wormhole

Kilo – https://github.com/squat/kilo

Calico – https://www.projectcalico.org/introducing-wireguard-encryption-with-calico/

k3s control‑plane deployment – https://fuckcloudnative.io/posts/deploy-k3s-cross-public-cloud/#4-部署控制平面

Topology label – https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion

Kilo location annotation – https://github.com/squat/kilo/blob/main/docs/annotations.md#location

kgctl – https://github.com/squat/kilo/blob/main/docs/kgctl.md

Written by Programmer DD, a tinkering programmer and author of "Spring Cloud Microservices in Action".
