
How Calico’s IPIP Mode Enables Cross‑Node Communication in Kubernetes

This article examines Calico’s IPIP networking mode in Kubernetes, detailing its architecture, core components, IPIP vs BGP operation, route analysis, packet‑capture findings, and pod‑to‑service communication to illustrate how IP encapsulation facilitates cross‑node traffic.


Introduction

This article analyses Calico's IPIP networking mode in Kubernetes, explaining the network devices this mode creates (the calixxxx veth interfaces and the tunl0 tunnel device) and how cross-node network communication works.

Calico Overview

Calico is a popular CNI plugin for Kubernetes, known for performance and flexibility. It provides L3 routing and network policy, and integrates with cloud platforms via BGP. Each node runs a virtual router (vRouter) that advertises pod routes over BGP; in pure BGP mode this removes the need for NAT or overlay tunnels.

Calico Architecture and Core Components

[Figure: Calico architecture diagram]

Key components:

Felix – agent on each workload node that configures routes and ACLs.

etcd – highly‑available key‑value store for Calico data.

BGP Client (BIRD) – reads the routes Felix programs into the kernel and distributes them to other nodes via BGP.

BGP Route Reflector (BIRD) – reduces mesh complexity in large deployments.

How Calico Works

Calico treats each host’s protocol stack as a router and each pod as an endpoint. Standard BGP runs between the routers, allowing them to learn the full network topology and forward traffic at L3.

Two Network Modes

1) IPIP – encapsulates one IP packet inside another, creating an IP-level tunnel that behaves like a point-to-point link between nodes. The kernel implementation resides in net/ipv4/ipip.c.

2) BGP – uses the Border Gateway Protocol to exchange routing prefixes between nodes. BGP is a path-vector protocol that selects routes based on path attributes and policy rather than IGP metrics.
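To make the IPIP encapsulation concrete, here is a minimal Python sketch of what tunl0 conceptually does: an inner pod-to-pod IPv4 header is wrapped in an outer host-to-host header whose protocol field is 4 (IPPROTO_IPIP). The pod IPs and the 172.16.35.4 next hop come from the environment shown later in the article; the outer source address 172.16.36.10 is a hypothetical host IP, and checksums are left at zero for brevity.

```python
import socket
import struct

def ipv4_header(src: str, dst: str, proto: int, payload_len: int) -> bytes:
    """Build a minimal 20-byte IPv4 header (no options, checksum left 0)."""
    ver_ihl = (4 << 4) | 5                     # version 4, IHL = 5 words
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0,             # version/IHL, TOS
                       20 + payload_len,       # total length
                       0, 0,                   # identification, flags/fragment
                       64, proto, 0,           # TTL, protocol, checksum
                       socket.inet_aton(src), socket.inet_aton(dst))

# Inner packet: the pod-to-pod ICMP echo (protocol 1 = ICMP).
icmp = b"\x08\x00\x00\x00\x12\x34\x00\x01"     # dummy ICMP echo header
inner = ipv4_header("10.20.105.215", "10.20.42.31", 1, len(icmp)) + icmp

# Outer packet: host-to-host, protocol 4 = IPIP (this is what tunl0 adds).
# 172.16.36.10 is an assumed address for node2.perf; 172.16.35.4 is the
# next hop that appears in the host routing table (node1.sit).
packet = ipv4_header("172.16.36.10", "172.16.35.4", 4, len(inner)) + inner

print(len(packet))        # 20 (outer) + 20 (inner) + 8 (ICMP) = 48 bytes
print(packet[9])          # outer protocol field: 4 = IPIP
```

The receiving node sees protocol 4 in the outer header, strips those 20 bytes, and hands the inner packet back to its routing table.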

IPIP Mode Analysis

In the author’s environment IPIP is enabled. The following commands illustrate pod discovery, ping tests, and routing tables.

<code># kubectl get po -o wide -n paas | grep hello

demo-hello-perf-d84bffcb8-7fxqj   1/1   Running   0   9d   10.20.105.215   node2.perf  <none>   <none>
demo-hello-sit-6d5c9f44bc-ncpql   1/1   Running   0   9d   10.20.42.31   node1.sit   <none>   <none>
</code>

Ping from the perf pod to the sit pod succeeds; note the TTL of 62 in the replies, which shows each reply crossed two routing hops (the two host kernels):

<code># ping 10.20.42.31
PING 10.20.42.31 (10.20.42.31) 56(84) bytes of data.
64 bytes from 10.20.42.31: icmp_seq=1 ttl=62 time=5.60 ms
64 bytes from 10.20.42.31: icmp_seq=2 ttl=62 time=1.66 ms
64 bytes from 10.20.42.31: icmp_seq=3 ttl=62 time=1.79 ms
--- 10.20.42.31 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 1.662/3.015/5.595/1.825 ms
</code>

Routing table inside the perf pod:

<code># route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 eth0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 eth0
</code>

Routing table on the host node (node2.perf) shows a tunl0 entry that forwards the 10.20.42.0/26 subnet via gateway 172.16.35.4:

<code># route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.36.1     0.0.0.0         UG    100    0        0 eth0
10.20.42.0      172.16.35.4     255.255.255.192 UG    0      0        0 tunl0
10.20.105.196   0.0.0.0         255.255.255.255 UH    0      0        0 cali4bb1efe70a2
...</code>
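The kernel selects the tunl0 route by longest-prefix match: 10.20.42.31 falls inside 10.20.42.0/26, which is more specific than the default route. A small sketch of that selection over a simplified three-entry version of the table above:

```python
import ipaddress

# Simplified view of node2.perf's routing table from the article.
routes = [
    ("0.0.0.0/0",        "172.16.36.1", "eth0"),             # default route
    ("10.20.42.0/26",    "172.16.35.4", "tunl0"),            # sit pod subnet
    ("10.20.105.196/32", None,          "cali4bb1efe70a2"),  # local pod veth
]

def lookup(dst: str):
    """Longest-prefix match over the table, as the kernel FIB does."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(net), gw, dev)
               for net, gw, dev in routes
               if addr in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)

net, gw, dev = lookup("10.20.42.31")
print(dev, gw)            # tunl0 172.16.35.4
```

Traffic to a local pod matches its /32 cali route instead, and anything else falls through to the default route on eth0.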

On the sit node (node1.sit) a similar tunl0 entry forwards traffic toward the perf node. The cali04736ec14ce interface is one end of a veth pair created for the pod; the other end appears inside the pod as eth0@if122964:

<code># ip a
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 ...
2: tunl0@NONE: &lt;NOARP&gt; mtu 1480 ...
4: eth0@if122964: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1380 ... inet 10.20.42.31/32 scope global eth0
</code>
<code># ip a | grep -A 5 "cali04736ec14ce"
122964: cali04736ec14ce@if4: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1380 ...
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
120918: calidd1cafcd275@if4: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1380 ...
</code>
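The MTU values in the ip a output follow directly from the encapsulation overhead: tunl0 advertises 1480 because the outer IPv4 header consumes 20 bytes of the physical 1500-byte MTU (the pod's 1380 is a lower, cluster-configured value leaving extra headroom). The arithmetic, as a quick sanity check:

```python
ETH_MTU = 1500            # physical link MTU
IPV4_HDR = 20             # IPIP adds exactly one extra IPv4 header
ICMP_HDR = 8

tunl0_mtu = ETH_MTU - IPV4_HDR
print(tunl0_mtu)                              # 1480, as shown for tunl0

# Largest ICMP payload that fits through the tunnel without fragmentation:
max_ping_payload = tunl0_mtu - IPV4_HDR - ICMP_HDR
print(max_ping_payload)                       # 1452
```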

Thus the packet leaves the host via tunl0 with an outer IP header added, crosses to the destination node, is decapsulated there, and reaches the destination pod through the corresponding veth interface.

Packet Capture Analysis

Running tcpdump -i eth0 -nn -w icmp_ping.cap on the sit node while pinging from the perf pod captures a five-layer frame: the Ethernet header, the outer host-to-host IP header, the inner pod-to-pod IP header, the ICMP header, and the ICMP payload.

[Figure: Packet layers diagram]

The double encapsulation is required because tunl0 is a tunnel endpoint; the outer IP header carries the packet between the two host nodes.
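Assuming 20-byte IP headers (no options) and no VLAN tag, both of which hold in this capture, the byte offset of each layer in the captured frame can be computed directly:

```python
# Layer sizes in the captured IPIP frame (no IP options, no VLAN tag).
sizes = [("Ethernet", 14),
         ("Outer IPv4 (host-to-host)", 20),
         ("Inner IPv4 (pod-to-pod)", 20),
         ("ICMP header", 8)]

offsets = {}
off = 0
for name, size in sizes:
    offsets[name] = (off, off + size)
    off += size
    print(f"{name:26} bytes {offsets[name][0]:3}-{offsets[name][1]}")

print("ICMP payload starts at byte", off)     # byte 62
```

These offsets match what a packet dissector shows when it decodes the capture: the inner IP header begins 34 bytes into the frame, immediately after the outer header that tunl0 added.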

[Figure: Encapsulation detail]

Pod‑to‑Service Access

Service objects for the two demo applications are listed with kubectl get svc -o wide -n paas. Curling the service IP from the sit pod returns HTTP 200, confirming that service traffic also traverses the IPIP tunnel.

<code># curl -I http://10.10.48.254:8080/actuator/health
HTTP/1.1 200
Content-Type: application/vnd.spring-boot.actuator.v3+json
Transfer-Encoding: chunked
Date: Fri, 30 Apr 2021 01:42:56 GMT
</code>
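Calico itself does not route the service VIP; kube-proxy's iptables (or IPVS) rules DNAT the service address to a backend pod IP before the routing decision, after which the packet follows the same tunl0 path as direct pod-to-pod traffic. A toy sketch of that rewrite, where the endpoint mapping is hypothetical (chosen to match the article's pods, not read from the cluster):

```python
# Hypothetical service -> endpoints mapping for this cluster (illustrative).
endpoints = {"10.10.48.254:8080": ["10.20.105.215:8080"]}

def dnat(dst: str) -> str:
    """Rewrite a service VIP:port to one backend, as kube-proxy's rules do."""
    backends = endpoints.get(dst)
    return backends[0] if backends else dst   # non-service traffic untouched

print(dnat("10.10.48.254:8080"))              # rewritten to the backend pod
print(dnat("10.20.42.31:8080"))               # pod IP passes through unchanged
```

After the rewrite, the destination is an ordinary pod IP, so the longest-prefix match on the host again selects the tunl0 route.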
[Figure: Service traffic capture]

Conclusion

IPIP mode encapsulates pod traffic in an additional IP layer, sending all cross-node packets through the tunl0 tunnel. This L3 tunnel adds less overhead than a VXLAN overlay (a single 20-byte IPv4 header versus roughly 50 bytes of VXLAN framing) but offers weaker security.

Tags: Cloud Native, Kubernetes, Network, CNI, Calico, IPIP
Written by Ops Development Stories

Maintained by a like-minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.