
Understanding kube-ovn-cni: How the CNI Plugin Manages Pod Network Interfaces

This article explains how the kube-ovn-cni component functions as a CNI plugin in Kubernetes, detailing its cmdAdd interface, interaction with the kube-ovn daemon, creation of veth pairs, OVS bridge integration, and network namespace configuration to manage pod network interfaces securely and efficiently.


Introduction

Kube-OVN is a Kubernetes networking project built on OVS/OVN that brings mature OpenStack networking capabilities to Kubernetes, greatly enhancing container network security, operability, manageability, and performance, and providing unique value for the Kubernetes ecosystem.

This series covers kube-ovn-controller, pod IP management, pod NIC management (the CNI plugin), pod security group features, and a unified Vagrant build-and-test environment, offering an in-depth analysis of Kube-OVN to help you get started quickly.

Author: Kube-OVN community contributor Mr. Li

Author's Note

Previously we described how the kube-ovn-controller component assigns IPs to pods when a pod creation event is received. This article introduces how the kube-ovn-cni component creates and manages pod network interfaces. The kube-ovn-cni component is essentially a CNI plugin deployed as a DaemonSet.

CNI Flow

When kubelet creates a pod container, it selects the CNI plugin from the configuration file in /etc/cni/net.d/ that sorts first (the numeric prefix, e.g. 01-kube-ovn.conflist, controls this ordering). The kube-ovn CNI configuration is shown below:
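
A representative 01-kube-ovn.conflist is sketched below. The exact values (cniVersion, the daemon socket path, and the chained portmap plugin) are typical defaults and may differ in your deployment; check the file on your own nodes:

```json
{
  "name": "kube-ovn",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock"
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

The server_socket entry is what ties the two halves together: it tells the /opt/cni/bin/kube-ovn binary where to reach the kube-ovn-daemon on the local node.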

Based on this configuration, kubelet builds request parameters according to the CNI specification and calls the /opt/cni/bin/kube-ovn binary’s cmdAdd interface. The binary implements the following logic:

kube-ovn CNI Binary cmdAdd Interface

When kubelet creates a pod sandbox, it invokes the cmdAdd interface. The kube-ovn CNI binary’s implementation does little more than construct an HTTP API request to the kube-ovn-daemon component and return the response to kubelet.
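
The binary's thin-client role can be sketched as follows. This is not kube-ovn's actual source; the struct fields and request shape are illustrative assumptions about the payload the binary forwards to the daemon:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CniRequest mirrors the values kubelet passes via the CNI protocol that the
// binary forwards to the cni-server. Field names here are illustrative, not
// the exact kube-ovn wire format.
type CniRequest struct {
	PodName      string `json:"pod_name"`
	PodNamespace string `json:"pod_namespace"`
	ContainerID  string `json:"container_id"`
	NetNs        string `json:"netns"`
	IfName       string `json:"ifname"`
}

// buildAddRequest marshals the CNI arguments into the JSON body that cmdAdd
// POSTs to the daemon's local unix socket; the daemon's reply is then turned
// into a CNI result and returned to kubelet.
func buildAddRequest(req CniRequest) ([]byte, error) {
	return json.Marshal(req)
}

func main() {
	body, err := buildAddRequest(CniRequest{
		PodName:      "demo",
		PodNamespace: "default",
		ContainerID:  "abc123",
		NetNs:        "/proc/42/ns/net",
		IfName:       "eth0",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

All of the real work happens on the daemon side; keeping the binary this small means it can be dropped into /opt/cni/bin without bundling any OVS or netlink dependencies.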

kube-ovn-daemon (cni‑server) Responds to Add Interface

The kube-ovn-daemon on the host acts as a CNI server, listening on a local Unix socket and handling API requests sent by /opt/cni/bin/kube-ovn .

HTTP Server Initialization and Handler Registration

handleAdd Callback

When a pod is created, kube-ovn-cni enters the handleAdd interface. This handler checks that kube-ovn-controller has written the IP allocation into the pod's annotations, then creates a veth pair, moves one end into the pod's network namespace to configure its IP and routes, and plugs the other end into the br-int OVS bridge, binding it to the corresponding OVN logical switch port via external_ids:iface-id so that traffic can flow.

configureNic Function Handles Container NIC

The main management of the pod’s network interface resides in the configureNic function, which creates the veth pair, moves one end into the pod’s netns, configures IP and routes, and adds the other end to the host’s br-int bridge while setting ingress/egress QoS limits.

configureContainerNic Enters Pod Netns to Configure NIC

We now look at the configureContainerNic function, which configureNic calls after moving the container end of the veth pair into the pod's netns.
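
Its steps inside the pod's network namespace can be sketched the same way, again as equivalent shell commands (the real code uses the netlink and netns libraries, and the addresses below are example values, with the IP and MAC taken from the pod annotations written by kube-ovn-controller):

```go
package main

import (
	"fmt"
	"strings"
)

// containerNicCommands lists the steps configureContainerNic performs after
// entering the pod's network namespace: rename the veth end to the CNI
// interface name, set its MAC and IP, bring it up, and add the default route.
func containerNicCommands(containerNic, ifName, ipCIDR, mac, gateway string) []string {
	return []string{
		fmt.Sprintf("ip link set %s name %s", containerNic, ifName), // rename to eth0
		fmt.Sprintf("ip link set %s address %s", ifName, mac),       // MAC from the pod annotation
		fmt.Sprintf("ip addr add %s dev %s", ipCIDR, ifName),        // IP from the pod annotation
		fmt.Sprintf("ip link set %s up", ifName),
		fmt.Sprintf("ip route add default via %s", gateway), // subnet gateway
	}
}

func main() {
	cmds := containerNicCommands("abc123def456_c", "eth0",
		"10.16.0.5/16", "00:00:00:a1:b2:c3", "10.16.0.1")
	fmt.Println(strings.Join(cmds, "\n"))
}
```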

Summary

The kube-ovn-cni component essentially acts as a CNI server that works together with the /opt/cni/bin/kube-ovn binary. The binary is invoked by kubelet during pod creation, and it communicates with kube-ovn-cni via a local HTTP API to perform actual pod network interface management.

kube-ovn-cni primarily performs the following tasks: checks pod annotations for IP allocation, creates the veth pair, creates the OVS port, and configures the pod’s network‑namespace NIC IP address and routing.

Additional Resources

Official website: https://www.kube-ovn.io

GitHub: https://github.com/kubeovn/kube-ovn

Slack: https://kube-ovn-slackin.herokuapp.com

WeChat group QR codes are provided in the original article for community interaction.

Tags: Cloud Native, Kubernetes, Networking, OVS, CNI, Pod, Kube-OVN
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
