
Why Build a New Network Plugin? Design Principles and Architecture of Kube-OVN

This article explains the practical motivations behind creating Kube-OVN, outlines its design principles, describes its overall architecture and network model, and details key features such as subnet management, IP allocation, QoS, gateway handling, traffic mirroring, and future development plans.

Cloud Native Technology Community

Existing open-source CNI solutions could not satisfy several customer requirements, such as subnet segmentation, fixed IPs, QoS, VLAN isolation, and traffic mirroring. This gap prompted the development of Kube-OVN, a custom network plugin built on OVN/OVS.

The design philosophy transfers mature OpenStack networking concepts (VPC, Subnet, Multi‑tenant, Floating IP, Security Group) to Kubernetes, unifies the data plane under OVN, strives to cover features of other CNI projects, and simplifies installation and usage.

Kube‑OVN’s architecture consists of three core components: kube-ovn-controller (watching Kubernetes resources and translating changes to the OVN northbound database), kube-ovn-cni (a thin CNI shim that forwards add/del commands), and kube-ovn-cniserver (handling annotations and configuring OVS on each node). The controller also writes network details (IP, MAC, gateway) back to pod annotations.
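The controller's annotation write-back can be observed directly on a running cluster. A sketch assuming the `ovn.kubernetes.io/*` annotation keys used by Kube-OVN (exact key names may differ between releases, and `my-pod` is an illustrative name):

```shell
# Inspect the network details kube-ovn-controller wrote back to a pod.
# Bracket notation with escaped dots is needed because the annotation
# keys themselves contain dots.
kubectl get pod my-pod -o jsonpath="{.metadata.annotations['ovn\.kubernetes\.io/ip_address']}"
kubectl get pod my-pod -o jsonpath="{.metadata.annotations['ovn\.kubernetes\.io/mac_address']}"
kubectl get pod my-pod -o jsonpath="{.metadata.annotations['ovn\.kubernetes\.io/gateway]']}"
```

The CNI server on each node reads these same annotations when wiring the pod into OVS, which is why the controller must commit them before the pod's network setup completes.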

The network model adopts a "Namespace‑per‑Subnet" approach, where each subnet maps to an OVN logical switch, enabling cross‑node subnets, fine‑grained ACLs, and future VPC‑style multi‑tenant isolation. All switches connect to a global logical router for default inter‑pod connectivity, while a special Node subnet links host and container networks.
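The resulting logical topology can be inspected with standard OVN tooling. A sketch assuming shell access to a node that can reach the OVN northbound database:

```shell
# List the logical switches (one per subnet/Namespace) and logical routers
# (including the global router that provides default inter-pod connectivity).
ovn-nbctl ls-list
ovn-nbctl lr-list

# Show the full logical topology: each switch's ports and its
# connection to the router.
ovn-nbctl show
```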

Key functional implementations include:

Subnet definition via annotations, supporting CIDR, gateway, reserved IPs, and ACLs.

IP allocation supporting both dynamic (OVN‑provided) and static (annotation‑based) assignments.

QoS enforcement using OVS ingress policing and port QoS (due to limitations in OVN QoS).

Gateway options: distributed per‑node gateways or centralized per‑Namespace gateways, with optional NAT or direct exposure.

Traffic mirroring via a dedicated mirror0 interface on each host, enabling simple packet capture with tcpdump -i mirror0.

Additional features such as OVN‑based load balancing (L2 LB with IP‑hash), NetworkPolicy via ACLs, and high‑availability mechanisms.
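Several of the features above are driven by annotations. A minimal sketch assuming the `ovn.kubernetes.io/*` keys from Kube-OVN's documentation (key names and units may vary by version, and in practice these annotations are set in the pod/Namespace spec at creation time rather than patched afterward):

```shell
# Define a subnet for a Namespace: CIDR, gateway, and reserved IPs.
kubectl annotate ns dev \
  ovn.kubernetes.io/cidr=10.17.0.0/16 \
  ovn.kubernetes.io/gateway=10.17.0.1 \
  ovn.kubernetes.io/exclude_ips=10.17.0.1..10.17.0.10

# Pin a static IP to a pod; it must fall inside the Namespace subnet.
kubectl annotate pod my-pod ovn.kubernetes.io/ip_address=10.17.0.20

# Rate-limit a pod via OVS ingress policing / port QoS (Mbit/s).
kubectl annotate pod my-pod \
  ovn.kubernetes.io/ingress_rate=3 \
  ovn.kubernetes.io/egress_rate=1
```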

Recent work focuses on IPv6 support, integration of monitoring/tracing tools (IPFIX, sFlow, NetFlow), and performance enhancements using DPDK to accelerate OVS data‑plane processing.
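For the monitoring integrations mentioned above, Open vSwitch already ships sFlow/NetFlow/IPFIX export hooks. A sketch using the standard ovs-vsctl interface (the agent interface and collector address are illustrative):

```shell
# Attach an sFlow exporter to the integration bridge; flow samples are
# sent to a collector at 192.168.1.10:6343. NetFlow and IPFIX are
# configured analogously via the netflow/ipfix tables.
ovs-vsctl -- --id=@sf create sflow agent=eth0 \
    target=\"192.168.1.10:6343\" sampling=64 polling=10 \
  -- set bridge br-int sflow=@sf
```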

The article concludes with a Q&A covering component responsibilities, load‑balancing strategies, multi‑tenant considerations, differences from ovn‑kubernetes, and practical deployment concerns.

Tags: cloud-native, kubernetes, CNI, Kube-OVN, OVN, Network Plugin
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
