Integrating OpenStack and Kubernetes Networks with Kube-OVN: Cluster Interconnect and Shared OVN Modes
This guide explains how to use Kube-OVN to connect OpenStack virtual machines and Kubernetes containers by configuring cluster interconnect or shared OVN modes, covering prerequisites, OVN‑IC database deployment, Kubernetes and OpenStack side settings, and example manifests for creating Pods in OpenStack subnets.
If you miss the rich networking capabilities of the SDN world while working in cloud-native environments, Kube-OVN may be the answer. This series introduces its advanced features and usage patterns to help you solve container networking challenges.
Cluster Interconnect works like the OVN‑IC multi‑cluster interconnect method, but replaces the two cluster ends with OpenStack and Kubernetes. Prerequisites include non‑overlapping CIDRs, a set of machines reachable by both clusters for the interconnect controller, and designated gateway nodes.
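The non-overlapping-CIDR prerequisite is worth checking before anything is deployed. A minimal sketch in shell (the CIDRs below are illustrative; substitute your own Kubernetes and OpenStack ranges):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  oldIFS=$IFS; IFS=.
  set -- $1
  IFS=$oldIFS
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# cidr_overlap NET1/LEN1 NET2/LEN2 -> prints "overlap" or "ok".
# Two ranges overlap iff their network bits agree under the shorter prefix.
cidr_overlap() {
  net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  len=$(( len1 < len2 ? len1 : len2 ))
  mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  if [ $(( $(ip_to_int "$net1") & mask )) -eq $(( $(ip_to_int "$net2") & mask )) ]; then
    echo overlap
  else
    echo ok
  fi
}

# Example: a Kubernetes subnet vs. two candidate OpenStack subnets.
cidr_overlap 10.16.0.0/16 192.168.0.0/24   # -> ok
cidr_overlap 10.16.0.0/16 10.16.1.0/24     # -> overlap
```

Run this for every pair of subnets that will be interconnected; any `overlap` result must be resolved by renumbering before proceeding.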
Deploy OVN‑IC Database
docker run --name=ovn-ic-db -d --network=host \
-v /etc/ovn/:/etc/ovn \
-v /var/run/ovn:/var/run/ovn \
-v /var/log/ovn:/var/log/ovn \
  kubeovn/kube-ovn:v1.10.6 bash start-ic-db.sh

Kubernetes Side Operations
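Before creating any Kubernetes-side resources, it helps to confirm the interconnect database started above is up and reachable. A quick check, using this guide's example host and ports (192.168.65.3, 6645/6646):

```shell
# Confirm the OVN-IC database container is running...
docker ps --filter name=ovn-ic-db --format '{{.Names}}: {{.Status}}'

# ...and that the IC NB/SB databases answer on their TCP ports.
nc -z 192.168.65.3 6645 && echo "IC NB reachable"
nc -z 192.168.65.3 6646 && echo "IC SB reachable"
```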
Create a ConfigMap ovn-ic-config in the kube-system namespace with fields such as enable-ic, az-name, ic-db-host, ic-nb-port, ic-sb-port, gw-nodes, and auto-route:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ovn-ic-config
  namespace: kube-system
data:
  enable-ic: "true"
  az-name: "az1"
  ic-db-host: "192.168.65.3"
  ic-nb-port: "6645"
  ic-sb-port: "6646"
  gw-nodes: "az1-gw"
  auto-route: "true"

OpenStack Side Operations
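Before switching over to the OpenStack side, you can apply and sanity-check the ConfigMap from the previous step (the filename is hypothetical; the label selector follows Kube-OVN's standard deployment, where the controller carries app=kube-ovn-controller):

```shell
# Apply the ConfigMap above, saved locally as ovn-ic-config.yaml (hypothetical name).
kubectl -n kube-system apply -f ovn-ic-config.yaml
kubectl -n kube-system get configmap ovn-ic-config -o yaml

# kube-ovn-controller watches this ConfigMap; its recent logs should
# show the interconnect being set up.
kubectl -n kube-system logs -l app=kube-ovn-controller --tail=20
```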
Create a logical router in OpenStack and set the availability zone name (different from other clusters):
# openstack router create router0
# ovn-nbctl set NB_Global . name=op-az

Start the OVN-IC controller pointing to the database:
/usr/share/ovn/scripts/ovn-ctl \
--ovn-ic-nb-db=tcp:192.168.65.3:6645 \
--ovn-ic-sb-db=tcp:192.168.65.3:6646 \
--ovn-northd-nb-db=unix:/run/ovn/ovnnb_db.sock \
--ovn-northd-sb-db=unix:/run/ovn/ovnsb_db.sock \
  start_ic

Mark the node in OVS as an interconnect gateway:
ovs-vsctl set open_vswitch . external_ids:ovn-is-interconn=true

Set up logical router ports to connect the OpenStack router router0 with the interconnect switch ts, and enable route advertisement and learning:
ovn-nbctl lrp-add router0 lrp-router0-ts 00:02:ef:11:39:4f 169.254.100.73/24
ovn-nbctl lsp-add ts lsp-ts-router0 -- \
  lsp-set-addresses lsp-ts-router0 router -- \
  lsp-set-type lsp-ts-router0 router -- \
  lsp-set-options lsp-ts-router0 router-port=lrp-router0-ts
ovn-nbctl lrp-set-gateway-chassis lrp-router0-ts {gateway chassis} 1000
ovn-nbctl set NB_Global . options:ic-route-adv=true options:ic-route-learn=true

Verify the routes learned from the Kubernetes side:
# ovn-nbctl lr-route-list router0

Shared Underlying OVN
In this mode, OpenStack and Kubernetes share the same underlying OVN instance, so the VPC and Subnet concepts of the two platforms map directly onto each other for tighter integration. OpenStack must use networking-ovn as its Neutron backend.
Neutron Configuration Changes
Edit /etc/neutron/plugins/ml2/ml2_conf.ini to point to the OVN central nodes:
[ovn]
ovn_nb_connection = tcp:[192.168.137.176]:6641,tcp:[192.168.137.177]:6641,tcp:[192.168.137.178]:6641
ovn_sb_connection = tcp:[192.168.137.176]:6642,tcp:[192.168.137.177]:6642,tcp:[192.168.137.178]:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER

Update each node's OVS configuration:
ovs-vsctl set open . external-ids:ovn-remote=tcp:[192.168.137.176]:6642,tcp:[192.168.137.177]:6642,tcp:[192.168.137.178]:6642
ovs-vsctl set open . external-ids:ovn-encap-type=geneve
ovs-vsctl set open . external-ids:ovn-encap-ip=192.168.137.200

Using OpenStack Resources in Kubernetes
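Before wiring Kubernetes into the shared OVN, the per-node tunnel settings from the previous step can be read back to confirm they took effect (values follow this guide's examples):

```shell
# Read back the tunnel configuration written with "ovs-vsctl set" above.
ovs-vsctl get open_vswitch . external-ids:ovn-remote
ovs-vsctl get open_vswitch . external-ids:ovn-encap-type   # expect "geneve"
ovs-vsctl get open_vswitch . external-ids:ovn-encap-ip
```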
Query existing OpenStack resources (router, network, subnet, server) and the synchronized VPC in Kubernetes:
# openstack router list
# openstack network list
# openstack subnet list
# openstack server list
# kubectl get vpc

Create a namespace, bind it to the imported VPC, define a subnet, and launch a Pod:
apiVersion: v1
kind: Namespace
metadata:
  name: net2
---
apiVersion: kubeovn.io/v1
kind: Vpc
metadata:
  name: neutron-22040ed5-0598-4f77-bffd-e7fd4db47e93
spec:
  namespaces:
    - net2
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: net2
spec:
  vpc: neutron-22040ed5-0598-4f77-bffd-e7fd4db47e93
  cidrBlock: 12.0.1.0/24
  natOutgoing: false
---
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  namespace: net2
spec:
  containers:
    - name: ubuntu
      image: kubeovn/kube-ovn:v1.8.0
      command: ["sleep", "604800"]
      imagePullPolicy: IfNotPresent
  restartPolicy: Always

For more details, refer to the official Kube-OVN documentation at https://kubeovn.github.io/docs/v1.10.x/.
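Once the Pod is running, end-to-end connectivity can be checked by pinging an OpenStack server in the same subnet from inside the Pod (12.0.1.5 below is a placeholder; use the address of one of your servers from openstack server list):

```shell
# Wait for the Pod to come up with an address from the net2 subnet.
kubectl -n net2 wait pod/ubuntu --for=condition=Ready --timeout=120s
kubectl -n net2 get pod ubuntu -o wide

# Ping an OpenStack VM in the shared network (placeholder address).
kubectl -n net2 exec ubuntu -- ping -c 3 12.0.1.5
```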
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.