How Neutron OVS Implements Multi‑Tenant Networking: From Compute Nodes to External Networks
This article explains the complete OpenStack Neutron OVS networking stack, covering compute‑node, network‑node and control‑node models, dual‑node deployment, flow‑table design for flat, VLAN and VxLAN networks, L3 routing, external network integration and practical CLI examples.
Neutron OVS Network Architecture
The Neutron Open vSwitch (OVS) architecture separates the network into four logical layers: the local network layer (Local VLAN IDs), the tenant network layer (Tenant VIDs for Flat, VLAN, VxLAN, GRE), the service network layer (L3 router and DHCP), and the external network layer (Internet connectivity).
Compute Node Network Model
VM traffic leaves the guest's virtual NIC (vNIC, e.g., eth0) through the corresponding tap device (e.g., vnet0) on the host. The tap device connects to a Linux bridge (or directly to OVS in newer releases), where security-group rules are enforced via iptables. A veth pair then links the Linux bridge to the OVS integration bridge br-int, where a locally allocated VLAN ID (the Local VLAN tag) provides Layer-2 switching.
Inside br-int the traffic carries this Local VLAN tag; the tag is removed or rewritten when the traffic exits toward the provider bridge. Before reaching the physical network, an "inner-to-outer VID conversion" translates the Local VLAN ID into a Tenant VID (a VLAN ID, VxLAN VNI, or GRE key) drawn from the ranges defined in ml2_conf.ini.
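The bookkeeping behind this conversion can be sketched in Python. This is a hypothetical illustration, not the actual neutron-openvswitch-agent code: the agent keeps a per-node map from locally allocated VLAN IDs to each network's tenant segmentation ID.

```python
# Hypothetical sketch of the "inner-to-outer VID conversion" bookkeeping.
# The OVS agent allocates a Local VLAN per network on each node and maps
# it to the tenant segmentation ID (VLAN ID, VxLAN VNI, or GRE key) that
# was drawn from the ranges in ml2_conf.ini. Class and method names are
# illustrative only.

class LocalVlanMap:
    def __init__(self, local_vlan_range=range(1, 4095)):
        self._free = list(local_vlan_range)
        # network_id -> (local_vlan, network_type, segmentation_id)
        self._map = {}

    def provision(self, network_id, network_type, segmentation_id):
        """Allocate a Local VLAN for a tenant network on this node."""
        local_vlan = self._free.pop(0)
        self._map[network_id] = (local_vlan, network_type, segmentation_id)
        return local_vlan

    def outbound_vid(self, network_id):
        """Inner-to-outer conversion: strip the Local VLAN, set the Tenant VID."""
        local_vlan, net_type, seg_id = self._map[network_id]
        return {"strip_vlan": local_vlan, "set": (net_type, seg_id)}

lvm = LocalVlanMap()
lvm.provision("net-a", "vlan", 100)    # tenant VLAN 100
lvm.provision("net-b", "vxlan", 1000)  # VxLAN VNI 1000
print(lvm.outbound_vid("net-a"))  # Local VLAN 1 -> VLAN 100
print(lvm.outbound_vid("net-b"))  # Local VLAN 2 -> VNI 1000
```

Because the Local VLAN space is allocated independently on every node, the same tenant network may use different Local VLAN IDs on different compute nodes; only the Tenant VID is globally meaningful.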
Example ml2_conf.ini snippet:
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_vlan]
network_vlan_ranges = provider1:1:1000
[ml2_type_vxlan]
vni_ranges = 1:1000
[ml2_type_gre]
tunnel_id_ranges = 1:1000
Network Node Network Model
The network node provides the first-hop gateway for VM traffic to the external world. It runs an L3 router (implemented by the neutron-l3-agent) and DHCP services inside separate Linux network namespaces (e.g., qrouter-XXX and qdhcp-XXX). Each router namespace contains an internal port attached to br-int and a gateway port on the external provider bridge (br-ex) that attaches to the physical network.
Control Node Network Model
The control node runs only the neutron‑server process, which exposes the REST API and forwards RPC calls to agents on other nodes. No data‑plane functions are performed here.
Dual‑Node Network Practice
A two‑node OpenStack deployment (controller + compute) is used to demonstrate the flow of traffic across the layers.
OVS Bridge Initial Flow Entries
Typical flow dump commands and their outputs are shown below.
# ovs-ofctl dump-flows br-int
cookie=0xea50ae8e9cc7754f, duration=69221.341s, table=0, priority=2,in_port="int-br-provider" actions=drop
cookie=0xea50ae8e9cc7754f, duration=69222.176s, table=0, priority=0 actions=resubmit(,60)
# ovs-ofctl dump-flows br-tun
cookie=0xbababcd06622b167, duration=69203.910s, table=0, priority=1,in_port="patch-int" actions=resubmit(,2)
Provider bridges (br-provider, br-provider-1, br-ex) simply forward packets to the physical network after optional VLAN stripping or modification.
Flat Network
Flat networks have no VLAN tagging; each flat network occupies a dedicated physical NIC. Configuration example:
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = provider
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = provider:br-provider
Creating a flat network via CLI:
openstack network create --enable --project admin \
--provider-network-type flat --provider-physical-network provider flat-net-1
openstack subnet create --network flat-net-1 --dhcp \
--ip-version 4 --subnet-range 192.168.1.0/24 --gateway 192.168.1.1 flat-subnet-1
When DHCP is enabled, a network:dhcp port is created inside a dedicated namespace (e.g., qdhcp-UUID) and attached to br-int with a Local VLAN tag of 1.
VLAN Network
VLAN networks use 802.1q tagging. Example configuration:
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = provider1:1:1000
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = provider1:br-provider-1
Creating a VLAN network and subnet:
openstack network create --enable --project admin \
--provider-network-type vlan --provider-physical-network provider1 \
--provider-segment 100 vlan-net-100
openstack subnet create --network vlan-net-100 --dhcp \
--ip-version 4 --subnet-range 192.168.1.0/24 --gateway 192.168.1.1 vlan-subnet-1
Flow-table conversion example (outbound):
In br‑provider-1, a flow changes Local VLAN tag 2 to Tenant VLAN 100.
In br‑int, a reverse flow changes Tenant VLAN 100 back to Local VLAN 2.
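The bidirectional rewrite can be modeled in a few lines of Python. This is a minimal sketch of what the mod_vlan_vid flow actions do, assuming Local VLAN 2 and Tenant VLAN 100 as in the example above; it is illustrative, not real OVS code.

```python
# Sketch of the bidirectional VLAN rewrite: outbound flows on
# br-provider-1 rewrite Local VLAN 2 -> Tenant VLAN 100, and inbound
# flows on br-int rewrite Tenant VLAN 100 -> Local VLAN 2.

LOCAL_TO_TENANT = {2: 100}
TENANT_TO_LOCAL = {v: k for k, v in LOCAL_TO_TENANT.items()}

def mod_vlan_vid(frame, mapping):
    """Apply a mod_vlan_vid-style rewrite to a frame's 802.1q tag."""
    rewritten = dict(frame)
    rewritten["vlan"] = mapping[frame["vlan"]]
    return rewritten

# Outbound: VM frame tagged with the Local VLAN leaves toward the wire.
outbound = mod_vlan_vid({"dst": "fa:16:3e:00:00:01", "vlan": 2}, LOCAL_TO_TENANT)
# Inbound: frame from the wire carries the Tenant VLAN.
inbound = mod_vlan_vid({"dst": "fa:16:3e:00:00:02", "vlan": 100}, TENANT_TO_LOCAL)
print(outbound["vlan"])  # 100
print(inbound["vlan"])   # 2
```

The symmetry matters: because both directions are rewritten, neither the VM nor the physical switch ever sees the other side's VLAN numbering.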
VxLAN Network
VxLAN provides an overlay network (L2‑in‑L3) using UDP encapsulation. Each VxLAN network is identified by a VNI (24‑bit).
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan
mechanism_drivers = openvswitch
[ml2_type_vxlan]
vni_ranges = 1:1000
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
datapath_type = system
bridge_mappings = provider:br-provider,provider1:br-provider-1
tunnel_bridge = br-tun
local_ip = 10.0.0.1
[agent]
tunnel_types = vxlan
Creating a VxLAN network and subnet:
openstack network create --enable --project admin \
--provider-network-type vxlan --provider-segment 1000 vxlan-net-1000
openstack subnet create --network vxlan-net-1000 --dhcp \
--ip-version 4 --subnet-range 172.16.1.0/24 --gateway 172.16.1.1 vxlan-subnet-1
Key flow-table actions:
Table 0 classifies packets arriving from the internal port (patch-int) and from the tunnel ports (vxlan-0a…).
Table 4 performs the "outer-to-inner VID conversion" (VNI → Local VLAN); table 10 then learns source MAC addresses.
Table 20 handles known unicast packets, stripping the Local VLAN tag and setting the VNI before output to the tunnel port.
Table 22 floods broadcast/multicast (and unknown unicast) packets to all tunnel ports in the same way.
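The reason the VNI can address so many more networks than an 802.1q tag is visible in the VxLAN header itself. The sketch below builds the 8-byte header per the RFC 7348 layout, purely to illustrate the 24-bit VNI field; it is not how OVS constructs packets.

```python
import struct

# Sketch of the 8-byte VxLAN header the tunnel port prepends (RFC 7348):
# 4 bytes of flags (the I bit set means a valid VNI follows), then the
# 24-bit VNI shifted left by 8 bits. This is why VNIs range up to
# 2**24 - 1 (~16M networks) versus 4094 usable 802.1q VLAN IDs.

def vxlan_header(vni):
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    flags = 0x08000000  # I flag: VNI present
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(1000)  # VNI 1000, as in vxlan-net-1000 above
print(len(hdr))                          # 8 (bytes)
print(int.from_bytes(hdr[4:7], "big"))   # 1000 (the VNI)
```

The full outer encapsulation (outer Ethernet, IP, and UDP headers, with local_ip as the tunnel source) adds roughly 50 bytes, which is why VxLAN deployments commonly raise the physical MTU.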
OVS-Based L3 Router
The Neutron L3 router runs as a service plugin. Each router creates a namespace qrouter-UUID with an internal interface (qr-XXX) for tenant networks and an external interface (qg-XXX) for the provider bridge.
Example router creation:
openstack router create --enable --centralized --project admin router1
Adding subnets (different IP ranges) to the router connects their gateway ports to the router namespace, enabling inter-subnet routing.
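The routing decision the qrouter namespace makes can be sketched with the standard ipaddress module. The port names and subnets below are illustrative (they reuse the example subnets from this article); the real namespace uses the kernel routing table, not Python.

```python
import ipaddress

# Sketch of inter-subnet routing inside a qrouter namespace: each
# attached subnet's gateway becomes a connected route on a qr- port,
# and forwarding picks the most specific matching route.

routes = {
    ipaddress.ip_network("192.168.1.0/24"): "qr-aaa",  # vlan-subnet-1
    ipaddress.ip_network("172.16.1.0/24"): "qr-bbb",   # vxlan-subnet-1
}

def next_hop_port(dst):
    """Return the qr- port for the longest-prefix connected route, if any."""
    dst = ipaddress.ip_address(dst)
    candidates = [net for net in routes if dst in net]
    if not candidates:
        return None  # falls through to the default route on the qg- port
    return routes[max(candidates, key=lambda n: n.prefixlen)]

print(next_hop_port("172.16.1.5"))  # qr-bbb
print(next_hop_port("8.8.8.8"))     # None -> default route via qg-
```

Traffic between the two tenant subnets therefore hairpins through the router namespace, while anything that matches no connected route is handed to the external gateway port.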
OVS-Based External Network
An external (provider) network is defined as a flat network with the physical label external. It is used for floating IPs and Internet access.
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = external:br-ex
Creating the external network and subnet:
openstack network create --enable --project admin \
--external --provider-network-type flat \
--provider-physical-network external ext_net
openstack subnet create --network ext_net --ip-version 4 \
--no-dhcp --subnet-range 172.18.22.0/24 \
--allocation-pool start=172.18.22.241,end=172.18.22.250 \
--gateway 172.18.22.1 ext_subnet-1
The external router (router-ext) gets a gateway port (qg-XXX) with a fixed IP (e.g., 172.18.22.245) and a default route to the physical gateway (172.18.22.1). A tenant network attached to this router can reach the Internet, and floating IPs are provided via NAT rules inside the router namespace:
# iptables -t nat -L -n -v
Chain neutron-l3-agent-float-snat (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT all -- * * 192.168.1.3 0.0.0.0/0 to:172.18.22.242
After adding a floating IP and associating it with a VM, the VM becomes reachable from the external network, and outbound traffic is NAT-ed to the external IP.
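The 1:1 translation those iptables rules implement can be sketched as a pair of lookup tables. The addresses match the example above; the functions are illustrative, not the actual neutron-l3-agent logic.

```python
# Sketch of floating-IP NAT in the qrouter namespace: outbound traffic
# from the fixed IP is SNAT-ed to the floating IP, and inbound traffic
# to the floating IP is DNAT-ed back to the fixed IP.

FLOATING = {"192.168.1.3": "172.18.22.242"}  # fixed -> floating
FIXED = {v: k for k, v in FLOATING.items()}  # floating -> fixed

def snat(pkt):
    """Rewrite the source of an outbound packet (float-snat chain)."""
    return {**pkt, "src": FLOATING.get(pkt["src"], pkt["src"])}

def dnat(pkt):
    """Rewrite the destination of an inbound packet (PREROUTING chain)."""
    return {**pkt, "dst": FIXED.get(pkt["dst"], pkt["dst"])}

out_pkt = snat({"src": "192.168.1.3", "dst": "8.8.8.8"})
in_pkt = dnat({"src": "8.8.8.8", "dst": "172.18.22.242"})
print(out_pkt["src"])  # 172.18.22.242
print(in_pkt["dst"])   # 192.168.1.3
```

VMs without a floating IP still reach the Internet via the router's shared SNAT to the qg- port address, but they are not reachable from outside.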
Key Takeaways
Neutron OVS separates local and tenant VLAN spaces and performs bidirectional VID conversion to avoid conflicts between flat, VLAN and VxLAN networks.
Network namespaces isolate the DHCP and router services per tenant network, ensuring IP and DNS isolation.
Flow tables on OVS bridges implement MAC learning, VLAN/VNI translation, and traffic steering for unicast, broadcast and multicast traffic.
External connectivity is achieved by attaching a flat provider network to a Neutron L3 router, using NAT for floating IPs.
