Understanding Underlay and Overlay Network Models in Kubernetes
This article provides an overview of Kubernetes networking by explaining the underlay and overlay network models, describing implementations such as flannel host-gw, Calico BGP, IPVLAN/MACVLAN, and tunneling technologies like VxLAN and IPIP, and offering practical references for each approach.
Overview
This article explores the network models used in Kubernetes and analyses various implementations of both underlay and overlay networks.
Underlay Network Model
What is Underlay Network
The Underlay Network refers to the physical network infrastructure: the switches, routers, and transport equipment (for example, DWDM systems) that form the physical topology carrying packets between networks.
The underlay can be a Layer-2 network (typical example: Ethernet) or a Layer-3 network (typical example: the Internet). Layer-2 technologies include VLAN, while Layer-3 technologies include routing protocols such as OSPF and BGP.
Underlay Network in Kubernetes
In Kubernetes, a common underlay pattern is to treat each host as a router: the hosts learn routes to each other's pod subnets, so pod traffic can cross nodes without encapsulation. Typical implementations are the flannel host-gw mode and the Calico BGP mode.
flannel host‑gw
In the flannel host-gw mode, all nodes must reside on the same Layer-2 network, and each node acts as a router. Traffic between pod subnets is forwarded via routes in the nodes' routing tables, effectively using the physical network directly as an underlay network.
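The routing behavior of host-gw can be sketched as a simple lookup: each node holds one route per peer node, pointing the peer's pod CIDR at the peer's host IP as the next hop. This is a conceptual model only, not flannel's actual code, and the CIDRs and IPs below are invented examples.

```python
import ipaddress

# Conceptual model of a host-gw node's routing table: one entry per peer
# node, mapping that node's pod CIDR to its host IP (the "gateway").
# All addresses here are invented examples.
routes = {
    "10.244.1.0/24": "192.168.0.11",  # node-1's pod CIDR -> node-1's host IP
    "10.244.2.0/24": "192.168.0.12",  # node-2's pod CIDR -> node-2's host IP
}

def next_hop(dst_pod_ip):
    """Return the host IP that owns the pod CIDR containing dst_pod_ip."""
    addr = ipaddress.ip_address(dst_pod_ip)
    for cidr, gateway in routes.items():
        if addr in ipaddress.ip_network(cidr):
            return gateway
    return None  # not a known pod subnet

print(next_hop("10.244.2.7"))  # -> 192.168.0.12
```

Because the next hop is always a directly reachable host IP, this scheme only works when all nodes share a Layer-2 segment, which is exactly the constraint noted above.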
Calico BGP
BGP (Border Gateway Protocol) is a decentralized routing protocol that maintains IP routing tables (or prefix tables) to provide reachability between Autonomous Systems (AS). In Calico's BGP mode, each node runs two main daemons: Felix, the agent that programs routes and ACLs on the host, and BIRD, the BGP client. The BGP client obtains routes from Felix and distributes them to other BGP peers. In larger deployments, a Route Reflector (RR) can be added: clients peer only with the reflector, which re-advertises their routes, greatly reducing the number of BGP sessions inside an AS.
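The benefit of a Route Reflector is easy to quantify: a full iBGP mesh between n speakers needs a session between every pair, while an RR topology needs only one session per client. A quick sketch:

```python
# Session counts for n iBGP speakers: a full mesh pairs every two
# speakers, while a single Route Reflector needs one session per client.
def full_mesh_sessions(n):
    return n * (n - 1) // 2

def rr_sessions(n):
    return n - 1  # each of the n-1 clients peers only with the reflector

for n in (10, 50, 100):
    print(n, full_mesh_sessions(n), rr_sessions(n))
```

At 100 nodes, the mesh already needs 4950 sessions versus 99 with a reflector, which is why RRs are the usual recommendation for large Calico clusters.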
IPVLAN & MACVLAN
IPVLAN and MACVLAN are NIC virtualization techniques. With IPVLAN, a single physical NIC carries multiple virtual interfaces that have their own IP addresses but share the parent's MAC address; MACVLAN is the opposite, giving each virtual interface its own MAC address on top of one physical NIC. Because pods attach almost directly to the physical network rather than to an encapsulated tunnel, both are considered underlay network technologies.
In Kubernetes, the typical CNI plugins that use IPVLAN/MACVLAN are multus and danm .
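To make the distinction concrete, the sketch below builds (but does not run) the iproute2 commands that create the two device types. The interface names and the "bridge"/"l2" modes are illustrative assumptions, not anything multus or danm mandates.

```python
# Build (but do not execute) iproute2 commands for the two device types.
# Interface names are invented; the modes are just common choices.
def macvlan_cmd(parent="eth0", name="macvlan0"):
    # Each MACVLAN sub-interface gets its own MAC address.
    return ["ip", "link", "add", name, "link", parent,
            "type", "macvlan", "mode", "bridge"]

def ipvlan_cmd(parent="eth0", name="ipvlan0"):
    # IPVLAN sub-interfaces share the parent's MAC and differ only in IP.
    return ["ip", "link", "add", name, "link", parent,
            "type", "ipvlan", "mode", "l2"]

print(" ".join(macvlan_cmd()))
print(" ".join(ipvlan_cmd()))
```

The only structural difference is the device type (and mode); the addressing behavior described above follows from which type the kernel driver creates.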
multus
multus is a CNI plugin open-sourced by Intel that acts as a meta-plugin: it invokes the cluster's default CNI alongside additional plugins so a pod can have multiple interfaces. Combined with the SR-IOV plugin, it can attach a pod to a virtual function (VF), and it likewise enables the use of IPVLAN/MACVLAN attachments.
When a new pod is created, the SR‑IOV plugin moves the host VF into the pod’s network namespace, sets the interface name according to the CNI configuration, and brings the VF up.
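Those three steps can be sketched as the iproute2 equivalents of what the plugin performs (the real plugin uses netlink directly; the VF name, netns name, and target interface name below are invented examples):

```python
# iproute2 equivalents of the three steps (the actual plugin uses netlink).
def vf_into_pod_cmds(vf, netns, new_name):
    return [
        ["ip", "link", "set", vf, "netns", netns],                 # 1. move VF into the pod netns
        ["ip", "-n", netns, "link", "set", vf, "name", new_name],  # 2. rename per CNI config
        ["ip", "-n", netns, "link", "set", new_name, "up"],        # 3. bring the VF up
    ]

for cmd in vf_into_pod_cmds("enp5s0f0v3", "cni-1234", "net1"):
    print(" ".join(cmd))
```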
danm
DANM is Nokia’s open‑source CNI project that brings carrier‑grade networking into Kubernetes. Like multus, it supports SR‑IOV/DPDK and also works with IPVLAN.
Overlay Network Model
What is Overlay
An overlay network builds a virtual logical network on top of an underlay using network-virtualization techniques. It typically employs tunneling protocols to encapsulate packets, allowing a virtual network to be created without changing the physical topology.
Common Tunnel Technologies
Generic Routing Encapsulation (GRE) – encapsulates IPv4/IPv6 packets at Layer-3.
Virtual Extensible LAN (VxLAN) – encapsulates Layer-2 Ethernet frames inside UDP packets, using destination port 4789 (flannel's default is 8472). VxLAN expands the 12-bit VLAN ID space to a 24-bit VNID, supporting up to 16 million logical networks.
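The 24-bit VNID lives in an 8-byte header prepended to the inner Ethernet frame. A minimal sketch of building that header per RFC 7348:

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni):
    """Build the 8-byte VxLAN header that precedes the inner Ethernet frame."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit field")
    # Layout: flags(1) + reserved(3) + VNI(3) + reserved(1)
    return struct.pack("!B3s3sB", VXLAN_FLAG_VNI, b"\x00" * 3,
                       vni.to_bytes(3, "big"), 0)

print(vxlan_header(42).hex())  # -> 0800000000002a00
print(2 ** 24)                 # -> 16777216 possible logical networks
```

This header rides inside a UDP datagram to port 4789 (or 8472 for flannel's default), which in turn rides in an ordinary IP packet on the underlay.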
IPIP
IP in IP (IPIP) is another tunnel protocol that encapsulates an IP packet inside another IP packet. It requires the kernel module ipip.ko, which can be loaded with modprobe ipip and verified with lsmod | grep ipip.
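IPIP needs no extra header format of its own: the tunnel simply prepends a second IPv4 header whose protocol field is 4 (IPPROTO_IPIP). A simplified sketch, with the checksum and options omitted and invented addresses:

```python
import socket
import struct

def ipip_encapsulate(inner_packet, src, dst):
    """Prepend an outer IPv4 header with protocol 4 (IP-in-IP).
    The checksum is left as 0 for brevity; a real stack computes it."""
    version_ihl = (4 << 4) | 5                 # IPv4, IHL=5 (20-byte header)
    total_len = 20 + len(inner_packet)
    outer = struct.pack("!BBHHHBBH4s4s",
                        version_ihl, 0, total_len,
                        0, 0,                  # identification, flags/fragment
                        64,                    # TTL
                        4,                     # protocol 4 = IPIP
                        0,                     # checksum (skipped here)
                        socket.inet_aton(src),
                        socket.inet_aton(dst))
    return outer + inner_packet

pkt = ipip_encapsulate(b"\x45" + b"\x00" * 19, "192.168.0.1", "192.168.0.2")
print(len(pkt), pkt[9])  # 40 bytes total; protocol byte = 4
```

The small, fixed 20-byte overhead is why IPIP is a popular lightweight alternative to VxLAN when only IP (not Layer-2) traffic needs to cross the tunnel.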
VxLAN
Both flannel and Calico implement VxLAN using Linux kernel support (available since kernel 3.7, recommended on 3.9+). In a Kubernetes cluster, flannel creates a VxLAN device (e.g., flannel.1) on each node, assigns a VNID, and maintains a forwarding database to map remote VxLAN MAC addresses to node IPs.
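Those forwarding-database entries correspond to bridge fdb entries mapping each remote VTEP MAC to its node IP. A sketch of the equivalent command, built but not run (the MAC address and IP are invented examples; the flannel.1 device name follows flannel's convention mentioned above):

```python
# Build (not run) the "bridge fdb" command corresponding to one entry in
# flannel's VXLAN forwarding database: remote flannel.1 MAC -> node IP.
def fdb_append_cmd(remote_mac, node_ip, dev="flannel.1"):
    return ["bridge", "fdb", "append", remote_mac,
            "dev", dev, "dst", node_ip]

print(" ".join(fdb_append_cmd("ae:11:22:33:44:55", "192.168.0.12")))
```

With these entries in place, the kernel knows which underlay node IP to use as the outer destination when a frame is addressed to a remote VTEP MAC.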
Weave VxLAN
Weave also uses VxLAN (referred to as fastdp, the fast datapath), implemented via the Open vSwitch datapath kernel module, and can encrypt traffic. Fastdp works in kernel mode on Linux 3.12 and later; on older kernels Weave falls back to the user-space "sleeve" mode.
Reference
https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#host-gw
https://projectcalico.docs.tigera.io/networking/bgp
https://www.weave.works/docs/net/latest/concepts/router-encapsulation/
https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin
https://github.com/nokia/danm
Top Architect