
Master Single-Host Container Networking with Namespaces, veth, Bridges & NAT

This guide walks through building isolated single‑host container networks on Linux using network namespaces, virtual Ethernet pairs, bridges, routing and NAT, showing step‑by‑step commands to create, connect, and expose containers, troubleshoot connectivity, and understand Docker’s networking models.


Introduction

Using containers can feel like magic: for those who understand the underlying principles they are powerful tools, while for newcomers they can be a nightmare. Fortunately, we already know that containers are just isolated Linux processes; they do not require images to run, and building images itself involves running containers.

Now we address single‑host container networking. This article answers:

How to virtualize network resources so each container thinks it has an exclusive network.

How containers can coexist without interfering and still communicate.

How to access the external world from inside a container.

How to expose a container to the host (port publishing).

The result is a simple combination of known Linux features:

Network namespaces

Virtual Ethernet devices (veth)

Virtual network switch (bridge)

IP routing and NAT

Prerequisites

Any Linux distribution works. All examples run on a Vagrant CentOS 8 VM.

<code>$ vagrant init centos/8
$ vagrant up
$ vagrant ssh
[vagrant@localhost ~]$ uname -a
Linux localhost.localdomain 4.18.0-147.3.1.el8_1.x86_64</code>

Rather than relying on Docker or Podman, we focus on the basic concepts using the simplest possible tools: ip, iptables, and a shell.

Network Namespace Isolation

The Linux network stack consists of network devices, routing rules, and netfilter hooks (managed via iptables). A quick script inspects all three:

<code>#!/usr/bin/env bash
echo "> Network devices"
ip link
echo -e "\n> Route table"
ip route
echo -e "\n> Iptables rules"
iptables --list-rules</code>

Before running the script, add a custom iptables chain so the root namespace's ruleset is easy to recognize later:

<code>$ sudo iptables -N ROOT_NS</code>

Running the script shows the root namespace's current devices, routes, and iptables rules. A newly created network namespace, by contrast, starts with its own empty copies of all three, which is exactly the isolation each container needs.
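To see that isolation directly, create a namespace and re-run the inspection inside it. A sketch, assuming the script above is saved as inspect.sh:

```shell
# Create a named network namespace; this also creates the
# handle file /var/run/netns/netns0 used by nsenter below.
sudo ip netns add netns0
ip netns list

# Run the inspection script inside the new namespace.
sudo nsenter --net=/var/run/netns/netns0 bash inspect.sh
# Expect only a DOWN loopback device, an empty route table,
# and no ROOT_NS chain: a fresh, fully isolated stack.
```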

Connecting Containers with veth

veth (virtual Ethernet) devices always come in pairs: a frame sent into one end comes out the other, so a pair acts as a tunnel between namespaces.

<code>$ sudo ip link add veth0 type veth peer name ceth0</code>

After creation, veth0 stays in the root namespace while ceth0 is moved to netns0:

<code>$ sudo ip link set ceth0 netns netns0</code>

Bring the devices up and assign IPs:

<code>$ sudo ip link set veth0 up
$ sudo ip addr add 172.18.0.11/16 dev veth0
$ sudo nsenter --net=/var/run/netns/netns0
# the following commands run inside netns0 (as root via nsenter)
$ ip link set lo up
$ ip link set ceth0 up
$ ip addr add 172.18.0.10/16 dev ceth0</code>

Ping tests confirm connectivity between the two ends of the veth pair.
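The ping check might look like this, using the addresses assigned above:

```shell
# From the root namespace, ping the container end of the pair.
ping -c 2 172.18.0.10

# From inside netns0, ping the host end.
sudo nsenter --net=/var/run/netns/netns0 ping -c 2 172.18.0.11
```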

Using a Linux Bridge

When multiple containers share the same IP subnet, independent veth pairs into the root namespace create conflicting routes there. A bridge (a virtual L2 switch) solves this by switching frames between all attached interfaces.
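The bridge commands below also reference a second pair, veth1/ceth1, set up for a second container the same way as the first. A sketch (the namespace name netns1 and address 172.18.0.20 are illustrative):

```shell
# Second container namespace with its own veth pair.
sudo ip netns add netns1
sudo ip link add veth1 type veth peer name ceth1
sudo ip link set ceth1 netns netns1
sudo ip link set veth1 up

# Configure the container side of the pair.
sudo nsenter --net=/var/run/netns/netns1 ip link set lo up
sudo nsenter --net=/var/run/netns/netns1 ip link set ceth1 up
sudo nsenter --net=/var/run/netns/netns1 ip addr add 172.18.0.20/16 dev ceth1
```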

<code>$ sudo ip link add br0 type bridge
$ sudo ip link set br0 up
$ sudo ip link set veth0 master br0
$ sudo ip link set veth1 master br0</code>

After attaching both veth pairs to the bridge, containers can ping each other directly.

Connecting to the External World (Routing & Masquerading)

Assign an IP address to the bridge itself so the host can route traffic to the containers (any addresses still assigned to the host-side veth ends should be removed first, to avoid conflicting routes):

<code>$ sudo ip addr add 172.18.0.1/16 dev br0</code>

Enable IP forwarding on the host:

<code>$ sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'</code>

Set up NAT so container‑originated packets appear to come from the host:

<code>$ sudo iptables -t nat -A POSTROUTING -s 172.18.0.0/16 ! -o br0 -j MASQUERADE</code>

Once each container namespace has a default route via the bridge address (172.18.0.1), containers can reach the Internet, and the host can reach them via their private IPs.
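Setting that default route and verifying outbound connectivity might look like this for netns0 (8.8.8.8 is just a convenient public address):

```shell
# Send non-local traffic to the bridge IP on the host.
sudo nsenter --net=/var/run/netns/netns0 \
    ip route add default via 172.18.0.1

# Verify external connectivity from inside the container.
sudo nsenter --net=/var/run/netns/netns0 ping -c 2 8.8.8.8
```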

Port Publishing

Run a service inside a container (e.g., a simple HTTP server) and access it from the host:

<code>$ sudo nsenter --net=/var/run/netns/netns0
$ python3 -m http.server --bind 172.18.0.10 5000</code>

From the host:

<code>$ curl 172.18.0.10:5000</code>

To expose the service externally, map a host port to the container's address and port; this is what Docker's bridge driver does when you publish a port with -p.
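Port publishing is typically implemented as a DNAT rule: inbound traffic on a chosen host port is rewritten to the container's address. A sketch, assuming port 5000 on both sides:

```shell
# Rewrite inbound TCP on host port 5000 to the container,
# for traffic that did not originate on the bridge itself.
sudo iptables -t nat -A PREROUTING ! -i br0 -p tcp --dport 5000 \
    -j DNAT --to-destination 172.18.0.10:5000
```

Note that PREROUTING only matches traffic arriving from outside; locally generated traffic would need a similar rule in the OUTPUT chain.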

Understanding Docker Network Drivers

Docker offers three main drivers:

host: no network namespace isolation; the container shares the host's stack.

none: only a loopback interface is present.

bridge (default): implements the veth + bridge model described above.
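The three drivers can be observed directly, assuming Docker is installed (alpine is just a convenient small image):

```shell
# List the built-in networks backing each driver.
docker network ls

# host driver: the container sees the host's real interfaces.
docker run --rm --network host alpine ip addr

# none driver: only a loopback interface.
docker run --rm --network none alpine ip addr

# bridge driver (default): a veth end attached to docker0.
docker run --rm alpine ip addr
```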

Rootless Containers

Rootless containers (e.g., Podman run by an unprivileged user) cannot set up this model directly, because creating veth devices requires root. They rely on slirp4netns instead, which provides user-mode networking via a TAP device.

Conclusion

The presented approach—network namespaces, veth pairs, a Linux bridge, routing, and NAT—is one of the most common ways to organize container networking on a single host. Many other solutions exist, often implemented as plugins, but they all depend on Linux’s network virtualization primitives.


Original article: https://iximiuz.com/en/posts/container-networking-is-simple/

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
