Technical Overview of Kube-OVN Based Network Solution for Mixed VM and Container Environments
This article presents a technical overview of how ByteDance selected Kube-OVN for a mixed virtual-machine and container networking scenario, describes the initial network design, identifies the performance issues that emerged, and outlines three improvement plans: OVS-DPDK, source-route optimization, and a switch from Geneve to VXLAN.
At KubeCon China 2021, ByteDance senior engineer Fei Xiang shared the team’s experience of selecting and implementing Kube-OVN for a mixed VM-and-container Kubernetes deployment, explaining the technical evaluation process and the reasons for choosing Kube-OVN.
The selection criteria included Kube-OVN’s centralized IPAM for flexible address management, its VPC/Subnet model that aligns with traditional IaaS networking, a control plane based on OVN for advanced orchestration, and an OVS‑based data plane that supports offload and DPDK acceleration.
The initial network design leveraged Kube-OVN’s features: pod‑to‑pod traffic uses Geneve tunnels, a distributed gateway subnet routes external traffic through the node’s OVN0 interface, and VM pods use underlay addresses with ECMP‑enabled gateways.
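As a rough sketch, a distributed-gateway subnet of the kind described could be declared with a Kube-OVN Subnet resource. The subnet name and CIDR below are illustrative, not taken from the talk:

```yaml
# Illustrative values; field names follow the Kube-OVN Subnet CRD.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: vm-subnet              # hypothetical subnet name
spec:
  cidrBlock: 10.16.0.0/16      # assumed address range for pods/VMs
  gateway: 10.16.0.1
  gatewayType: distributed     # external traffic exits via each node's ovn0
  natOutgoing: true
```

With `gatewayType: distributed`, each node handles egress for its local pods rather than funneling traffic through a single gateway node.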
The original presentation included diagrams of the pod network traffic model and the service traffic model.
After deployment, three major issues were observed: excessive source routes per pod affecting scalability, long traffic paths in VM scenarios impacting performance, and limited industry adoption of Geneve tunnels raising compatibility concerns.
To address these, three improvement plans were proposed:
Plan 1 – OVS‑DPDK: Replace the kernel OVS with OVS‑DPDK, create vhost‑client sockets via cniserver for VM creation, and adjust container networking to support multiple NICs.
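A minimal sketch of the OVS-side setup this plan implies, assuming a DPDK-enabled OVS build; the bridge name, port name, and socket path are assumptions, not details from the talk:

```shell
# Enable DPDK in OVS (requires an OVS build compiled with DPDK support).
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Add a vhost-user-client port; the VM's virtio NIC attaches to this socket,
# which the cniserver would create during VM pod setup (names are hypothetical).
ovs-vsctl add-port br-int vhost0 -- \
  set Interface vhost0 type=dpdkvhostuserclient \
  options:vhost-server-path=/var/run/openvswitch/vhost0.sock
```

In client mode, OVS connects to a socket served by the VM's hypervisor process, which lets OVS restart without tearing down the VM's datapath. These commands require a live OVS instance and are shown as a configuration sketch only.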
Plan 2 – Source‑Route Optimization (pending rollout): Deploy a public logical switch (172.168.0.1/30) with a localnet port, issue a default route (0.0.0.0/0 via 172.168.0.2) to reduce per‑pod routes, configure MAC‑binding, and modify the br‑ex controller to rewrite MACs before forwarding.
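The steps above could be expressed with standard ovn-nbctl commands roughly as follows; the logical router name and the physical network mapping name are illustrative assumptions:

```shell
# Create the public logical switch with a localnet port bridged to the
# physical network (the network_name value must match a bridge mapping).
ovn-nbctl ls-add public
ovn-nbctl lsp-add public ln-public
ovn-nbctl lsp-set-type ln-public localnet
ovn-nbctl lsp-set-addresses ln-public unknown
ovn-nbctl lsp-set-options ln-public network_name=external

# Replace the many per-pod source routes with a single default route
# via the gateway address from the 172.168.0.1/30 segment.
ovn-nbctl lr-route-add vpc-router 0.0.0.0/0 172.168.0.2
```

This collapses the route table from one entry per pod to a single default route, with the MAC rewriting handled separately by the modified br-ex controller described above. These commands assume a running OVN northbound database and are a sketch of the plan, not a verified deployment procedure.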
Plan 3 – Replace Geneve with VXLAN: Simplify tunneling by using VXLAN (only VNI as tunnel ID), adjust OVN tables to handle MAC binding and egress pipelines, and ensure no MAC address conflicts within the same VPC.
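Switching the encapsulation is done per node by changing what ovn-controller advertises; a sketch, with the tunnel IP left as a placeholder. Note that VXLAN carries only a 24-bit VNI, so OVN must pack both the datapath and port identifiers into it, which is why the OVN tables need the adjustments described above:

```shell
# Per-node sketch: advertise VXLAN instead of Geneve encapsulation.
# OVN has supported VXLAN transport (with a reduced tunnel-metadata
# space) since OVN 20.09.
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=vxlan
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=<node-tunnel-ip>
```

These settings take effect on the local chassis only and assume a live ovn-controller; they are shown as a configuration sketch rather than a tested migration recipe.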
The article concludes with a vision for future Kube-OVN capabilities, such as multi‑cluster interconnect, tighter VPC integration, enhanced service features, NFV functions (LB, NAT, VPN), richer observability tools, and the possibility of a data‑plane implementation independent of OVN.
Additional resources include the Kube‑OVN website, GitHub repository, Slack channel, and QR codes for joining the community.