Understanding Flannel CNI Plugin and VXLAN Communication in Kubernetes
This guide explains how Flannel works as a Kubernetes CNI overlay network, detailing its VXLAN mode, the roles of cni0, flannel.1 and flanneld, routing and FDB handling, and provides command‑line examples to trace pod‑to‑pod traffic across nodes.
Flannel is one of the CNI network plugins for Kubernetes, providing a host overlay network that supports multiple forwarding modes, with VXLAN over UDP being the most common.
Flannel Features
Assigns a unique virtual IP to every container across the cluster.
Creates an overlay network that delivers packets to the destination container with the inner packet unchanged.
Installs a virtual network device on each host (flannel0 in the legacy UDP mode, flannel.1 in VXLAN mode) that receives traffic from the local bridge and forwards it to other nodes over the overlay.
Uses etcd to keep configuration consistent across all nodes; each node watches etcd for changes.
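When flannel is backed directly by etcd (rather than the Kubernetes API, which kubeadm-based installs commonly use instead), the shared configuration and per-node subnet leases can be inspected with etcdctl. The key prefix below is flannel's default and may differ in your deployment:

```shell
# Cluster-wide network config written at install time (flannel's default key prefix).
etcdctl get /coreos.com/network/config

# Per-node subnet leases; every flanneld watches this directory for changes.
# (etcd v2 API syntax, which classic flannel deployments use.)
etcdctl ls /coreos.com/network/subnets
```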
Component Explanation
cni0: a bridge device; each pod gets a veth pair, with one end inside the pod (eth0) and the other attached to cni0.
flannel.1: the VXLAN overlay device (the node's VTEP) that encapsulates and decapsulates packets between nodes.
flanneld: the agent running on each host that obtains a subnet lease from the cluster address space and registers the node's MAC/IP information in etcd.
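All three components can be inspected on any node. The device names below are flannel's defaults, and the subnet.env path may vary by installation:

```shell
# VXLAN details (VNI, local VTEP address, UDP port) for the overlay device.
ip -d link show flannel.1

# The bridge address doubles as the default gateway for local pods.
ip addr show cni0

# flanneld records the subnet it leased for this node here.
cat /run/flannel/subnet.env
```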
Pod‑to‑Pod Communication Flow (different nodes)
Data generated in a pod is sent to cni0 based on the pod's routing table.
cni0 forwards the packet to the VXLAN device flannel.1 .
flannel.1 looks up the route for the destination IP; the kernel resolves the remote VTEP MAC from the ARP/FDB entries that flanneld maintains (using information registered in etcd) and encapsulates the inner frame in a VXLAN/UDP packet.
The encapsulated packet is sent to the remote node's VTEP IP.
The remote node's kernel receives the VXLAN packet, decapsulates it, and passes it to its own flannel.1 device.
flannel.1 forwards the inner packet to cni0 , which finally delivers it to the target pod via the appropriate veth pair.
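The encapsulation steps above can be observed directly with tcpdump. Flannel's VXLAN backend defaults to UDP port 8472; the uplink name eth0 is an assumption for illustration:

```shell
# Outer packets: node-to-node UDP datagrams carrying the VXLAN payload.
tcpdump -i eth0 -nn udp port 8472

# Inner packets: the same traffic before encapsulation, visible on the VXLAN device.
tcpdump -i flannel.1 -nn icmp
```

Pinging one pod from another on a different node while both captures run shows the same ICMP exchange twice: once as plain pod-to-pod packets on flannel.1, and once wrapped in UDP between the two node addresses on eth0.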
Example routing tables inside pods (VXLAN mode) show the default gateway as the cni0 IP (e.g., 172.20.0.1), the node's own pod subnet (e.g., 172.20.0.0/24) reached directly, and the cluster-wide pod network (e.g., 172.20.0.0/16) routed via that gateway.
# kubectl -n stack exec -it api-0 -- bash
ip route show
default via 172.20.0.1 dev eth0
172.20.0.0/24 dev eth0 proto kernel scope link src 172.20.0.73
172.20.0.0/16 via 172.20.0.1 dev eth0

Host routing tables reveal how traffic to the remote pod subnet is directed to the VXLAN device flannel.1:
# ip route -n
default via 10.19.114.1 dev eth0
10.19.114.0/24 dev eth0 proto kernel scope link src 10.19.114.100
172.20.0.0/24 dev cni0 proto kernel scope link src 172.20.0.1
172.20.1.0/24 via 172.20.1.0 dev flannel.1 onlink

The kernel consults the forwarding database (FDB) to map destination MAC addresses to remote VTEP IPs. If an entry is missing, the kernel raises an "L2 miss" event, which flanneld handles by querying etcd for the owning node's IP and updating the FDB.
# /sbin/bridge fdb show dev flannel.1
42:6e:8b:9b:e2:73 dst 10.19.114.101 self permanent

After the outer VXLAN header is added, the packet traverses the physical network, reaches the destination node, is decapsulated, and the inner Ethernet frame is delivered to the target pod via cni0 and its associated veth pair (e.g., vethf4995a29).
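flanneld programs these FDB entries itself, but the equivalent manual operation illustrates what each one means (the MAC and IP below are the example values from the output above):

```shell
# Map the remote node's VTEP MAC to its underlay IP, as flanneld does:
# any frame for this MAC leaving flannel.1 gets encapsulated toward that node.
bridge fdb append 42:6e:8b:9b:e2:73 dev flannel.1 dst 10.19.114.101
```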
Commands such as brctl show list the bridge interfaces and attached veth devices, confirming the relationship between host bridges, VXLAN devices, and pod network interfaces.
# brctl show
bridge name bridge id STP enabled interfaces
cni0 8000.a656432b14cf no veth1f7db117
veth3ee31d24
...
flannel.1 8000.024216a031b6 no

Overall, Flannel's VXLAN mode relies on etcd for subnet allocation, the kernel FDB for MAC-to-VTEP resolution, and the combination of cni0, flannel.1, and flanneld to provide seamless pod-to-pod networking across a Kubernetes cluster.
360 Tech Engineering
Official tech channel of 360, building the most professional technology aggregation platform for the brand.