
Understanding Single‑Host Container Networking with Linux Namespaces, veth, Bridges and iptables

This tutorial explains how to isolate, virtualize and connect container network stacks on a single Linux host using network namespaces, virtual Ethernet pairs, a Linux bridge, routing, NAT and iptables rules, and shows how to expose container services to the external world.

Cloud Native Technology Community

Using containers can feel like magic: they are easy for those who understand the underlying Linux primitives, but a nightmare for newcomers. This article demystifies single‑host container networking by answering four key questions: how to virtualize network resources so each container thinks it has an exclusive network, how containers can coexist without interfering, how a container can reach the external network, and how to publish container ports to the host.

The solution relies on well‑known Linux features: network namespaces, virtual Ethernet devices (veth), a Linux bridge, IP routing and NAT. No additional code is required beyond standard command‑line tools.

1. Inspect the host network stack

vagrant init centos/8
vagrant up
vagrant ssh
uname -a

Run a simple script to list devices, routes and iptables rules:

#!/usr/bin/env bash
echo "> Network devices"
ip link

echo -e "\n> Route table"
ip route

echo -e "\n> Iptables rules"
iptables --list-rules

Run on the host, the script lists the physical and virtual devices, the routing table and the current iptables rules. Later, when run inside a freshly created network namespace, the same script shows only the loopback device, confirming isolation.

2. Create a network namespace and enter it

sudo ip netns add netns0
ip netns list   # shows netns0
sudo nsenter --net=/var/run/netns/netns0 bash

Inside the namespace the ip link output shows only lo, proving the network stack is isolated.
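The same check can be done from the host without opening an interactive shell, since ip netns exec runs a single command inside the namespace:

```shell
# Run "ip link" inside netns0 from the host; only the (still down)
# loopback device should be listed.
sudo ip netns exec netns0 ip link
```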

3. Connect the namespace to the host with a veth pair

sudo ip link add veth0 type veth peer name ceth0
sudo ip link set ceth0 netns netns0
sudo ip link set veth0 up
# Bring up loopback and the container end inside the namespace
sudo nsenter --net=/var/run/netns/netns0 ip link set lo up
sudo nsenter --net=/var/run/netns/netns0 ip link set ceth0 up

Assign IP addresses:

sudo ip addr add 172.18.0.11/16 dev veth0
sudo nsenter --net=/var/run/netns/netns0 ip addr add 172.18.0.10/16 dev ceth0

Ping between the two ends to verify connectivity.
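Concretely, the round trip can be checked from both sides, assuming the addresses assigned above:

```shell
# From the host to the container end of the pair
ping -c 2 172.18.0.10
# From inside netns0 back to the host end
sudo nsenter --net=/var/run/netns/netns0 ping -c 2 172.18.0.11
```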

4. Scale to multiple containers – use a Linux bridge

sudo ip link add br0 type bridge
sudo ip link set br0 up
sudo ip link set veth0 master br0
sudo ip link set veth1 master br0   # repeat for a second container

Since veth0 is now a bridge port, its IP address no longer belongs on it; remove it and assign an address to the bridge instead, so the host can route traffic:

sudo ip addr del 172.18.0.11/16 dev veth0
sudo ip addr add 172.18.0.1/16 dev br0

Now containers can ping each other and the host bridge IP.
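For completeness, here is one way to wire up the second container mentioned above: a netns1 namespace connected to the bridge through a veth1/ceth1 pair. The names and the 172.18.0.20 address are illustrative choices, not mandated by anything:

```shell
sudo ip netns add netns1
sudo ip link add veth1 type veth peer name ceth1
sudo ip link set ceth1 netns netns1
sudo ip link set veth1 up
sudo ip link set veth1 master br0
sudo nsenter --net=/var/run/netns/netns1 ip link set lo up
sudo nsenter --net=/var/run/netns/netns1 ip link set ceth1 up
sudo nsenter --net=/var/run/netns/netns1 ip addr add 172.18.0.20/16 dev ceth1
# Cross-container ping through the bridge
sudo nsenter --net=/var/run/netns/netns0 ping -c 2 172.18.0.20
```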

5. Enable forwarding and NAT for external access

sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -s 172.18.0.0/16 ! -o br0 -j MASQUERADE
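The -s 172.18.0.0/16 match above means "source address whose first 16 bits equal those of 172.18.0.0", so only container traffic is masqueraded. As a quick illustration of that prefix test (a hypothetical helper, not part of the setup), this bash sketch reproduces the check:

```shell
#!/usr/bin/env bash
# in_subnet ADDR — succeeds if ADDR falls inside 172.18.0.0/16,
# i.e. its first two dotted octets (16 bits) are 172 and 18.
in_subnet() {
  local IFS=. a b rest
  read -r a b rest <<< "$1"
  [ "$a" -eq 172 ] && [ "$b" -eq 18 ]
}

in_subnet 172.18.0.10 && echo "masqueraded"
in_subnet 10.0.2.15   || echo "left alone"
```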

Test connectivity to the Internet from a container:

sudo nsenter --net=/var/run/netns/netns0 ping -c 2 8.8.8.8

Finally, publish a container service (e.g., a Python HTTP server) on host port 5000 (10.0.2.15 is the VM's primary address):

# Inside netns0 (e.g. entered with: sudo nsenter --net=/var/run/netns/netns0 bash)
python3 -m http.server --bind 172.18.0.10 5000
# DNAT rules on the host: PREROUTING handles packets arriving from outside,
# while OUTPUT handles locally generated traffic, which never hits PREROUTING
sudo iptables -t nat -A PREROUTING -d 10.0.2.15 -p tcp --dport 5000 -j DNAT --to-destination 172.18.0.10:5000
sudo iptables -t nat -A OUTPUT -d 10.0.2.15 -p tcp --dport 5000 -j DNAT --to-destination 172.18.0.10:5000

Load the bridge‑filter module so iptables can see bridged traffic:

sudo modprobe br_netfilter

After these steps, curl 10.0.2.15:5000 returns the container’s web page, demonstrating full end‑to‑end connectivity.
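To return the host to its original state, the experiment can be torn down in reverse. Deleting a namespace also destroys the veth end inside it, which takes the host-side peer with it:

```shell
sudo ip netns delete netns0
sudo ip netns delete netns1 2>/dev/null || true   # if a second container was created
sudo ip link delete br0
# veth0/veth1 disappear automatically with their peers
sudo iptables -t nat -D POSTROUTING -s 172.18.0.0/16 ! -o br0 -j MASQUERADE
sudo iptables -t nat -D PREROUTING -d 10.0.2.15 -p tcp --dport 5000 -j DNAT --to-destination 172.18.0.10:5000
sudo iptables -t nat -D OUTPUT -d 10.0.2.15 -p tcp --dport 5000 -j DNAT --to-destination 172.18.0.10:5000
```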

The article concludes by noting that while this is one common approach, many other CNI plugins (e.g., Cilium, Kube‑OVN) build on the same Linux networking primitives, reinforcing that container networking is fundamentally a form of Linux network virtualization.

Tags: NAT, Bridge, iptables, Container Networking, Linux Namespaces, Veth
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
