How Rancher Implements Overlay and Flat Networks with CNI
This article explains Rancher's use of the CNI specification to build IPsec and VXLAN overlay networks, details the evolution of container networking models, and describes how to configure a flat network using custom bridges and ebtables rules for seamless container communication.
1. Introduction
The article introduces container networking and the goal of achieving a flat network on a container cloud platform.
Content includes:
What is the CNI interface;
Implementation of container networking based on CNI – IPsec and VXLAN;
Direct routing access to containers.
2. The Container Era
2.1 Background
Since Docker appeared in 2013, enterprises have increasingly adopted containers, but they still face problems such as data persistence, network selection, and multi‑cloud deployment.
Rancher, an open‑source container‑cloud platform, has been deployed by over 4,000 users worldwide.
2.2 Types of Container Networks
2.2.1 Original container networks
Describes the three original Docker networking modes, all built on the host's network stack, and their shared limitation: cross-host communication requires NAT and port mapping.
Bridge mode: the default; containers attach to the host's Linux bridge (docker0) via veth pairs;
Host mode: the container shares the host's network namespace directly;
Container mode: the container reuses another container's network namespace.
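The three modes map directly to Docker's network flag; a quick sketch (container names here are illustrative):

```shell
# Bridge mode (default): a veth pair attaches the container to docker0,
# with NAT for outbound traffic and -p for inbound port mapping.
docker run -d --name web nginx

# Host mode: the container shares the host's network namespace,
# so the service binds directly to the host's ports (no NAT).
docker run -d --network host --name web-host nginx

# Container mode: reuse another container's network namespace;
# "sidecar" sees the same interfaces and IP as "web".
docker run -d --network container:web --name sidecar busybox sleep 3600
```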
2.2.2 Evolution of container networking
Two main specifications have emerged: Container Networking Model (CNM) and Container Networking Interface (CNI). Both are plugin‑based; CNM is Docker‑originated, CNI is driven by Google/Kubernetes and is more flexible.
2.3 CNM and CNI Overview
2.3.1 CNM
CNM is Docker's proposed spec, implemented by libnetwork and adopted by projects such as Calico and Weave. It defines three abstractions: the network sandbox, the endpoint, and the network.
2.3.2 CNI
CNI is a lightweight spec defining a JSON contract between a container runtime and network plugins. A plugin must implement two operations, ADD and DEL; the spec covers IPAM and L2/L3 connectivity, while leaving L4 port mapping to the runtime.
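The contract is simple: the runtime execs a plugin binary, passes the operation through CNI_* environment variables, and feeds the network configuration as JSON on stdin. A minimal sketch, assuming the standard bridge and host-local plugins (paths, names, and subnets are illustrative):

```shell
# Minimal CNI network configuration (delivered to the plugin on stdin).
cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.42.0.0/16"
  }
}
EOF

# The runtime invokes the plugin with the operation in CNI_COMMAND;
# ADD wires the container into the network, DEL tears it down.
CNI_COMMAND=ADD \
CNI_CONTAINERID=example-container \
CNI_NETNS=/var/run/netns/example \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf
```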
3. Rancher’s Overlay Network Implementation
Rancher rewrote its networking stack to fully support CNI and added support for third‑party CNI plugins. It provides both IPsec and VXLAN overlay networks.
3.1 IPsec implementation
Describes the IPsec deployment and the interface renaming, and shows the Dockerfile in which rancher-net uses the CNI bridge plugin together with a custom IPAM plugin (rancher-cni-ipam).
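Based on that description, the network configuration handed to the bridge plugin would look roughly as follows, with Rancher's own IPAM plugin swapped in for host-local. This is a sketch under stated assumptions; the field values are not taken from the actual Dockerfile:

```shell
# Hypothetical CNI config pairing the stock bridge plugin with
# Rancher's custom IPAM plugin (subnet and name are assumptions).
cat > /etc/cni/net.d/10-rancher.conf <<'EOF'
{
  "name": "rancher-cni-network",
  "type": "bridge",
  "bridge": "docker0",
  "ipam": {
    "type": "rancher-cni-ipam",
    "subnet": "10.42.0.0/16"
  }
}
EOF
```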
3.2 VXLAN implementation
Explains how to enable VXLAN by disabling the default IPsec environment template and creating a new one; the VXLAN driver creates a VTEP device and encapsulates overlay traffic in UDP on port 4789.
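The VTEP setup the driver performs can be approximated by hand with iproute2 (the device name, VNI, and addresses here are illustrative, not Rancher's actual values):

```shell
# Create a VXLAN tunnel endpoint: VNI 1042, overlay frames
# encapsulated in UDP datagrams on the IANA-assigned port 4789.
ip link add vtep1042 type vxlan id 1042 dev eth0 dstport 4789
ip addr add 10.42.0.1/16 dev vtep1042
ip link set vtep1042 up
```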
4. Rancher Flat Network Implementation
Flat networking assigns business IPs directly to containers, eliminating NAT but consuming more address space. Rancher achieves this by creating a custom bridge (mybridge) and connecting containers via veth pairs.
The workflow includes routing configuration, ebtables rules that admit traffic from the metadata service (169.254.169.250) and ARP replies from 10.43.0.2, and optional Layer 3 bridging between docker0 and the CNI bridge.
Drop all traffic arriving from veth-cni except:
IP traffic from 169.254.169.250;
ARP replies from 10.43.0.2.
ebtables -t broute -A BROUTING -i veth-cni -j DROP
ebtables -t broute -I BROUTING -i veth-cni -p ipv4 --ip-source 169.254.169.250 -j ACCEPT
ebtables -t broute -I BROUTING -i veth-cni -p arp --arp-opcode 2 --arp-ip-src 10.43.0.2 -j ACCEPT
Drop ARP requests for 10.43.0.2 going out eth1:
ebtables -t nat -A POSTROUTING -p arp --arp-opcode 1 --arp-ip-dst 10.43.0.2 -o eth1 -j DROP
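The custom-bridge wiring described above can be sketched with iproute2; the bridge, interface, namespace, and address values below are illustrative placeholders, not Rancher's actual configuration:

```shell
# Create the custom bridge that carries business-network traffic.
ip link add mybridge type bridge
ip link set mybridge up

# Create a veth pair: one end stays on the bridge, the other
# becomes the container's eth0 inside its network namespace.
ip link add veth-host type veth peer name veth-cont
ip link set veth-host master mybridge
ip link set veth-host up
ip link set veth-cont netns mycontainer

# Inside the namespace: rename the interface, assign the business
# IP directly (no NAT), and point the default route at the gateway.
ip netns exec mycontainer ip link set veth-cont name eth0
ip netns exec mycontainer ip addr add 192.168.1.50/24 dev eth0
ip netns exec mycontainer ip link set eth0 up
ip netns exec mycontainer ip route add default via 192.168.1.1
```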