Mastering OpenStack Neutron SR‑IOV: Boost Network Performance with VLAN & NUMA
This guide explains the performance limitations of Neutron OVS networking, introduces SR‑IOV as a high‑performance I/O virtualization solution, and provides step‑by‑step configuration for enabling SR‑IOV agents, mapping physical networks, creating VLAN and flat networks, handling NUMA affinity, security groups, and bonding, with detailed command examples and XML snippets.
Reference Articles
A Chronicle of the Development of Future Internet Technology: From ARPANET to Fully Programmable Networks
SDN — The OpenFlow SDN Protocol Standard
SDN — Open vSwitch Software Architecture and Operating Principles
SDN — Common Open vSwitch Commands and Usage Examples
SDN — Neutron's Multi-Tenant VPC Virtual Network Model
SDN — EVPN VxLAN Overlay Technology Explained
SDN — How the Neutron OVS Network Model Is Implemented
High-Performance Networking — SR-IOV Single Root I/O Virtualization
Neutron OVS Network Model Performance Bottleneck
Neutron OVS creates many virtual network devices (e.g., tap, veth, qbr, br‑int, br‑ethX) on compute nodes, causing significant CPU overhead and limiting network bandwidth; a typical Intel 82599ES 10 GbE NIC may only achieve 5‑6 Gbps throughput.
Therefore, high‑performance scenarios require a better I/O virtualization solution such as SR‑IOV.
Neutron SR‑IOV Network Model
SR‑IOV allows a single physical NIC (PF) to be shared among multiple VMs via virtual functions (VFs), eliminating the need for tap devices, qbr bridges, and OVS, thus reducing CPU overhead and achieving near line‑rate performance (≈9.4 Gbit/s on Intel 82599ES).
Neutron Configure SR‑IOV Agent
Official documentation: https://docs.openstack.org/newton/networking-guide/config-sriov.html
Ensure SR‑IOV and VT‑d are enabled in BIOS.
Enable IOMMU in Linux by adding intel_iommu=on to kernel parameters (e.g., via GRUB).
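One way to persist the parameter on RHEL/CentOS hosts is grubby (a sketch; iommu=pt is an optional passthrough-mode optimization, not required by SR-IOV itself):

```shell
# Append intel_iommu=on (and iommu=pt) to the boot arguments of all installed kernels
grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
# After a reboot, confirm the IOMMU came up
dmesg | grep -i -e DMAR -e IOMMU
```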
...linux16 /boot/vmlinuz-3.10.0-862.11.6.rt56.819.el7.x86_64 root=LABEL=img-rootfs ro console=tty0 console=ttyS0,115200n8 crashkernel=auto rhgb quiet intel_iommu=on iommu=pt isolcpus=2-3,8-9 nohz=on nohz_full=2-3,8-9 rcu_nocbs=2-3,8-9 intel_pstate=disable nosoftlockup default_hugepagesz=1G hugepagesz=1G hugepages=16 LANG=en_US.UTF-8
Create VFs on each compute node via the PCI SYS interface (e.g., enp129s0f0, enp129s0f1).
# cat /etc/sysconfig/network-scripts/ifcfg-enp129s0f0
DEVICE="enp129s0f0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
# cat /etc/sysconfig/network-scripts/ifcfg-enp129s0f1
DEVICE="enp129s0f1"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
# echo 16 > /sys/class/net/enp129s0f0/device/sriov_numvfs
# echo 16 > /sys/class/net/enp129s0f1/device/sriov_numvfs
Verify that the VFs have been created and are up.
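Note that writing a count larger than the NIC's sriov_totalvfs fails; scripted setups can guard the write first. A minimal sketch, with SYS_ROOT parameterized so the logic can be exercised against a fake tree without SR-IOV hardware (on a real node, leave SYS_ROOT unset so the writes land in /sys/class/net):

```shell
# Guarded VF creation: refuse requests beyond the NIC's advertised maximum.
create_vfs() {
    local root="${SYS_ROOT:-/sys/class/net}" pf="$1" want="$2"
    local max
    max=$(cat "$root/$pf/device/sriov_totalvfs")
    if [ "$want" -gt "$max" ]; then
        echo "refusing: $pf supports at most $max VFs" >&2
        return 1
    fi
    # the kernel rejects changing a non-zero VF count directly; reset to 0 first
    echo 0 > "$root/$pf/device/sriov_numvfs"
    echo "$want" > "$root/$pf/device/sriov_numvfs"
}

# Demo against a fake sysfs tree (hardware-free; real usage: create_vfs enp129s0f0 16)
SYS_ROOT=$(mktemp -d)
mkdir -p "$SYS_ROOT/enp129s0f0/device"
echo 63 > "$SYS_ROOT/enp129s0f0/device/sriov_totalvfs"
echo 0  > "$SYS_ROOT/enp129s0f0/device/sriov_numvfs"
create_vfs enp129s0f0 16
cat "$SYS_ROOT/enp129s0f0/device/sriov_numvfs"
```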
# lspci | grep Ethernet
03:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
...
81:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
# ip link show enp129s0f0
4: enp129s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 90:e2:ba:34:fb:32 brd ff:ff:ff:ff:ff:ff
vf 0 MAC be:40:5c:21:98:31, spoof checking on, link-state auto, trust off, query_rss off
...
Persist VFs on reboot.
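The sriov_numvfs setting resets to 0 at boot, so the write must be repeated. On systemd-based hosts, a oneshot unit ordered before the SR-IOV agent is one option (a sketch; the unit name is an assumption):

```ini
# /etc/systemd/system/sriov-vfs.service  (hypothetical unit name)
[Unit]
Description=Create SR-IOV VFs at boot
After=network-pre.target
Before=neutron-sriov-nic-agent.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 16 > /sys/class/net/enp129s0f0/device/sriov_numvfs'
ExecStart=/bin/sh -c 'echo 16 > /sys/class/net/enp129s0f1/device/sriov_numvfs'

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable sriov-vfs.service.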
echo "echo '7' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local
Enable neutron-sriov-nic-agent service on compute nodes.
# /etc/neutron/plugins/ml2/sriov_agent.ini
[sriov_nic]
physical_device_mappings = sriov1:enp129s0f0,sriov1:enp129s0f1
exclude_devices =
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
systemctl enable neutron-sriov-nic-agent.service
systemctl start neutron-sriov-nic-agent.service
# or
neutron-sriov-nic-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/sriov_agent.ini
Configure OVS agent to support DHCP for SR-IOV VLAN networks.
On the controller, map the SR-IOV physical network (sriov1) into the OVS agent's bridge mappings so that the DHCP port is tagged with the correct VLAN; otherwise OVS leaves it on the dead VLAN 4095 and DHCP traffic never reaches the SR-IOV instances.
The DHCP port's tag is applied on the controller's OVS bridge br-int; the OVS agent must know the mapping in order to install flow rules on br-provider that translate between the tenant VLAN ID and the local VLAN ID.
For flat networks, this extra mapping is unnecessary.
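Concretely, the controller-side mapping might look like this (the bridge name br-provider, and reusing it for both physnets, are assumptions based on this setup):

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini (controller / network node)
[ovs]
bridge_mappings = datacentre:br-provider,sriov1:br-provider
```

Restart the OVS agent after changing the mapping.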
Configure neutron‑server (controller) ML2 settings.
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_flat]
flat_networks = datacentre,sriov1
[ml2_type_vlan]
network_vlan_ranges = datacentre:1:29,sriov1:1:29
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ed
agent_required = True
Restart neutron-server after changes.
Configure nova‑scheduler to use PCI passthrough filter.
# /etc/nova/nova.conf
[DEFAULT]
scheduler_default_filters=...,PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
Whitelist PCI devices in nova-compute.
# /etc/nova/nova.conf
[DEFAULT]
pci_passthrough_whitelist=[{"devname": "enp129s0f0", "physical_network": "sriov1"},{"devname": "enp129s0f1", "physical_network": "sriov1"}]
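Devices can also be whitelisted by PCI vendor/product ID instead of devname, which matches every VF regardless of interface naming (a sketch; 8086:10ed is the 82599 VF ID from the lspci output above, so adjust to your hardware):

```ini
# /etc/nova/nova.conf -- equivalent whitelist matching by PCI IDs
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "sriov1"}
```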
# restart nova-compute
Create SR-IOV Network, Subnet, and Port
Use OpenStack CLI to create network, subnet, and direct‑type port, then boot an instance with the port.
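A sketch of the network and subnet creation steps, which the commands that follow assume have already been run (the network name, segmentation ID, and CIDR are illustrative assumptions):

```shell
neutron net-create sriov-net --provider:network_type vlan \
    --provider:physical_network sriov1 --provider:segmentation_id 20
neutron subnet-create sriov-net 192.168.20.0/24 --name sriov-subnet
```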
# net_id=$(neutron net-show sriov-net | awk '/ id / {print $4}')
# port_id=$(neutron port-create $net_id --name sriov_port --binding:vnic_type direct | awk '/ id / {print $4}')
# nova boot --flavor 2U2G --image centos7-1809-99cloud --nic port-id=$port_id test-sriov
Inspect the instance XML to verify the hostdev interface with VLAN tag.
<interface type='hostdev' managed='yes'>
  <mac address='fa:16:3e:ff:8b:bc'/>
  <driver name='vfio'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x81' slot='0x13' function='0x7'/>
  </source>
  <vlan>
    <tag id='20'/>
  </vlan>
  <alias name='hostdev0'/>
</interface>
SR‑IOV and NUMA Affinity
SR-IOV NICs have NUMA node affinity, viewable via /sys/class/net/<interface>/device/numa_node. When the flavor requests a NUMA topology and CPU pinning, Nova schedules the instance onto the NIC's NUMA node.
# cat /sys/class/net/enp129s0f0/device/numa_node
1
If resources on that NUMA node are exhausted, the instance launch may fail with "insufficient compute resources". Queens introduced PCI NUMA policies for more flexible placement.
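NUMA-aware scheduling is requested through flavor extra specs; for example (reusing the 2U2G flavor from above, with illustrative values):

```shell
openstack flavor set 2U2G \
    --property hw:cpu_policy=dedicated \
    --property hw:numa_nodes=1
```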
SR‑IOV and VLAN Tag
For VLAN‑type SR‑IOV networks, the VLAN tag is applied directly on the VF. The controller’s OVS bridge must map the physical network and VLAN range to allow DHCP communication.
# ip link show enp129s0f1
5: enp129s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> ... vf 14 MAC fa:16:3e:90:5e:9c, vlan 19, ...
SR‑IOV and Security Groups
NOTE: SR‑IOV ports do not support security groups.
SR‑IOV Bonding
Configure two PFs on the same SR‑IOV card as physnet2:eth0 and physnet3:eth1, then create bonded VF ports with identical MAC addresses.
# On controller
openstack network create --project admin --provider-network-type vlan --provider-physical-network physnet2 --provider-segment 1337 vlan-1337-eth0
openstack network create --project admin --provider-network-type vlan --provider-physical-network physnet3 --provider-segment 1337 vlan-1337-eth1
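The subnet and port steps can be sketched as follows; the two direct ports deliberately share one MAC so the guest bond presents a single address (the MAC, names, and CIDR are assumptions):

```shell
openstack subnet create --network vlan-1337-eth0 --no-dhcp --subnet-range 192.168.37.0/24 subnet-eth0
openstack subnet create --network vlan-1337-eth1 --no-dhcp --subnet-range 192.168.37.0/24 subnet-eth1
openstack port create --network vlan-1337-eth0 --vnic-type direct --mac-address fa:16:3e:00:13:37 port-eth0
openstack port create --network vlan-1337-eth1 --vnic-type direct --mac-address fa:16:3e:00:13:37 port-eth1
```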
# Create subnets without DHCP, then ports with direct vnic type and matching MAC.
Cloud-config example to set up bonding on the VM.
write_files:
- path: /etc/modules-load.d/bonding.conf
content: bonding
- path: /etc/sysconfig/network-scripts/ifcfg-bond0
content: |
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
DEVICE=bond0
DNS1=DNS_SERVER
GATEWAY=SERVER_GATEWAY
IPADDR=SERVER_IPV4
NAME=bond0
ONBOOT=yes
PREFIX=SUBNET_PREFIX_SIZE
TYPE=Bond
- path: /etc/sysconfig/network-scripts/ifcfg-ens4
content: |
DEVICE=ens4
MASTER=bond0
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
- path: /etc/sysconfig/network-scripts/ifcfg-ens5
content: |
DEVICE=ens5
MASTER=bond0
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
runcmd:
- [rm, /etc/sysconfig/network-scripts/ifcfg-eth0]
power_state:
mode: reboot