Implementing Flat, VLAN, and VXLAN Networks with Open vSwitch and Docker
This article, the third installment of the "Open vSwitch Full Analysis" series, demonstrates how to build flat, VLAN, and VXLAN networking topologies by combining Open vSwitch (OVS) with Docker containers, covering architecture concepts, required host configuration, step-by-step command sequences, and connectivity testing in single-host and multi-host environments.
Flat Mode
Flat networking creates a single L2 broadcast domain where all VMs or containers share the host's physical network without VLAN tags. It requires the host's physical NIC to be attached directly to an OVS bridge, and the bridge may hold an IP address for management.
Configuration steps (run on the host whose business NIC is ens33, with IP 192.168.159.188 and gateway 192.168.159.2):
ovs-vsctl add-br ovs-docker0
ifconfig ovs-docker0 192.168.159.188 up
ifconfig ens33 0 up
ovs-vsctl add-port ovs-docker0 ens33
route add default gw 192.168.159.2
After creating the bridge, launch containers without network settings and attach them to the OVS bridge:
# Host 1
docker run -itd --net=none --name b1 busybox:latest
docker run -itd --net=none --name b2 busybox:latest
# Host 2
docker run -itd --net=none --name b3 busybox:latest
docker run -itd --net=none --name b4 busybox:latest
Attach containers to the bridge:
ovs-docker add-port ovs-docker0 eth1 39b16ae53286 --ipaddress=192.168.159.100/24 --gateway=192.168.159.2
ovs-docker add-port ovs-docker0 eth1 fd3431210b71 --ipaddress=192.168.159.101/24 --gateway=192.168.159.2
Repeat the same commands on the second host with its own container IDs. Verify connectivity to the gateway, the Internet, and inter-container communication both within and across hosts.
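These checks can be scripted from host 1, for example. This is a sketch under the container names and addresses used above; the address assumed for b3 on host 2 (192.168.159.102) is an assumption, so adjust it to whatever you actually assigned there:

```shell
# Sketch: flat-mode connectivity checks run from host 1.
# Assumes b1/b2 are attached as shown above; adjust names/IPs to your setup.

# Gateway reachability from inside a container
docker exec b1 ping -c 2 192.168.159.2

# Same-host container-to-container (b1 -> b2)
docker exec b1 ping -c 2 192.168.159.101

# Cross-host (b1 on host 1 -> b3 on host 2; 192.168.159.102 is assumed here)
docker exec b1 ping -c 2 192.168.159.102

# Internet reachability (ping by IP to keep DNS out of the picture)
docker exec b1 ping -c 2 8.8.8.8
```

If any step fails, check `ovs-vsctl show` on both hosts first: in flat mode every container port and the physical NIC must sit on the same bridge.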
VLAN Mode
VLAN (Virtual Local Area Network) logically partitions a Layer‑2 network using 802.1Q tags. In OVS, tagging is applied on bridge ports, allowing containers to send and receive tagged traffic while the host’s business NIC operates in trunk mode.
Key points:
Management port must have an IP for SSH access.
Business NIC (e.g., ens33) is added as a trunk port to the OVS bridge.
Containers are attached as access ports with specific VLAN IDs.
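The 802.1Q tag that OVS applies carries a 12-bit VLAN ID, which bounds how many segments VLAN mode can multiplex on one trunk. A quick sanity check of that limit:

```shell
# 802.1Q carries a 12-bit VLAN ID; IDs 0 and 4095 are reserved,
# so one trunk can carry at most 4094 usable VLANs.
usable_vlans=$(( (1 << 12) - 2 ))
echo "$usable_vlans"   # prints 4094
```

This 4094-segment ceiling is the main reason large multi-tenant deployments move to VXLAN, covered below.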
Sample configuration on each host:
ovs-vsctl add-br ovs-docker0
ovs-vsctl add-port ovs-docker0 ens33
ifconfig ovs-docker0 up
Launch containers (same as flat mode) and attach them with VLAN tags:
# Host 1
docker run -itd --net=none --name b1 busybox:latest
docker run -itd --net=none --name b2 busybox:latest
ovs-docker add-port ovs-docker0 eth1 b1 --ipaddress=192.168.100.2/24 --gateway=192.168.100.1
ovs-docker add-port ovs-docker0 eth1 b2 --ipaddress=192.168.200.2/24 --gateway=192.168.200.1
ovs-docker set-vlan ovs-docker0 eth1 b1 100
ovs-docker set-vlan ovs-docker0 eth1 b2 200
# Host 2 (similar commands with its own container names)
Configure the external L3 switch with gateways 192.168.100.1 and 192.168.200.1 so containers can reach the Internet and communicate across hosts.
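A quick way to confirm the tagging took effect is to inspect the bridge and then probe from each container. This is a sketch assuming b1 sits in VLAN 100 and b2 in VLAN 200, as configured above, and that the L3 switch gateways are up:

```shell
# Sketch: VLAN-mode checks on host 1 (b1 in VLAN 100, b2 in VLAN 200).

# Each container port should show its tag (tag: 100 / tag: 200)
ovs-vsctl show

# Same-VLAN traffic reaches its own gateway directly...
docker exec b1 ping -c 2 192.168.100.1
docker exec b2 ping -c 2 192.168.200.1

# ...but b1 cannot reach b2's subnet at Layer 2; cross-VLAN traffic
# must be routed by the external L3 switch.
docker exec b1 ping -c 2 192.168.200.2
```

The last ping succeeding or failing tells you whether inter-VLAN routing is configured on the switch; at Layer 2 the two VLANs stay isolated either way.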
VXLAN Mode
VXLAN (Virtual Extensible LAN) provides overlay networking by encapsulating Ethernet frames inside UDP packets, enabling up to 16 million logical networks without relying on physical VLAN infrastructure.
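The 16-million figure comes from VXLAN's 24-bit VNI (VXLAN Network Identifier) field, and the encapsulation adds a fixed per-packet overhead that reduces the usable inner MTU. Both numbers are easy to verify:

```shell
# 24-bit VNI -> number of distinct VXLAN segments
vni_count=$(( 1 << 24 ))
echo "$vni_count"   # prints 16777216 (~16 million)

# Encapsulation overhead over IPv4 (no outer VLAN tag):
# outer Ethernet 14 + outer IPv4 20 + UDP 8 + VXLAN header 8 = 50 bytes
overhead=$(( 14 + 20 + 8 + 8 ))
echo $(( 1500 - overhead ))   # inner MTU on a 1500-byte link: 1450
```

In practice this means either lowering the MTU inside the overlay to 1450 or raising the physical network's MTU, otherwise large packets are fragmented or dropped.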
Implementation steps:
# Host 1 (IP 192.168.159.188)
ovs-vsctl add-br ovs-docker0
ovs-vsctl add-port ovs-docker0 vxlan -- set interface vxlan type=vxlan options:remote_ip=192.168.159.200
# Host 2 (IP 192.168.159.200)
ovs-vsctl add-br ovs-docker0
ovs-vsctl add-port ovs-docker0 vxlan -- set interface vxlan type=vxlan options:remote_ip=192.168.159.188
Start containers on both hosts (same commands as before) and attach them to the OVS bridge with IP configuration:
# Host 1
ovs-docker add-port ovs-docker0 eth1 b1 --ipaddress=192.168.100.2/24 --gateway=192.168.100.1
# Host 2
ovs-docker add-port ovs-docker0 eth1 b3 --ipaddress=192.168.100.3/24 --gateway=192.168.100.1
After configuring, verify the overlay network by checking bridge status and testing end-to-end connectivity between containers across the two hosts.
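The tunnel and the overlay can be sanity-checked as follows. This sketch assumes host 1's container holds 192.168.100.2 and host 2's holds 192.168.100.3, with b1 as host 1's container name and ens33 as the underlay NIC:

```shell
# Sketch: VXLAN overlay checks, run on host 1.

# The bridge should list a port named "vxlan" with type vxlan
# and the peer's remote_ip in its options.
ovs-vsctl show

# End-to-end ping across the overlay (host 1 container -> host 2 container)
docker exec b1 ping -c 2 192.168.100.3

# On the physical NIC the same traffic appears UDP-encapsulated;
# OVS uses the standard VXLAN destination port 4789 by default.
tcpdump -ni ens33 udp port 4789
```

Seeing the ICMP echoes inside UDP/4789 frames in the tcpdump output confirms the frames are actually traversing the tunnel rather than the underlay directly.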
The article concludes with screenshots of successful connectivity tests for each mode.
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.