
How Docker Leverages Linux Network Namespaces, Bridges, and Veth Pairs for Container Isolation

This article explains the core Linux networking technologies—Network Namespace, bridge devices, Veth pairs, iptables/netfilter, and routing—that Docker relies on to provide isolated, configurable network stacks for containers, and includes practical command examples.

MaGe Linux Operations

Docker’s technology depends on the evolution of Linux kernel virtualization, using networking features such as Network Namespace, Veth pairs, iptables/netfilter, bridges, and routing. The following sections detail these foundational technologies before moving on to full container networking.

Network Namespace

To support multiple instances of the network protocol stack, Linux introduces Network Namespace, isolating independent protocol stacks in separate namespaces that cannot communicate with each other. Global variables are made namespace‑specific, and the kernel implicitly uses the namespace’s variables, making Network Namespace transparent to applications that do not require special handling.

When a new Network Namespace is created and a process is attached, all network stack variables are stored in a data structure private to that process group, avoiding conflicts with other groups.
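A minimal sketch of this isolation (the namespace name demo_ns is illustrative, and the commands require root or CAP_NET_ADMIN): a freshly created namespace contains nothing but a down loopback device, with none of the host’s interfaces, routes, or firewall rules visible.

```shell
# Create a new network namespace (requires root / CAP_NET_ADMIN).
ip netns add demo_ns

# Inside it, the protocol stack is empty except for a loopback device;
# the host's interfaces, routes, and iptables rules are not visible here.
ip netns exec demo_ns ip link show

# Bring loopback up inside this namespace only.
ip netns exec demo_ns ip link set lo up

# Clean up.
ip netns delete demo_ns
```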

Docker uses Network Namespace to achieve container network isolation. If a container is run with the host network stack (e.g., docker run -d --net=host --name c_name i_name), it shares the host’s ports, which can cause conflicts. Typically, containers are instead given their own IPs and ports via Network Namespace, which raises the question of how isolated containers communicate with each other.

Net Bridge

In Linux, a bridge acts like a data‑link‑layer device that forwards frames based on MAC addresses. Docker automatically creates a default bridge (docker0); any interface connected to docker0 can communicate through it.

Bridge Details

A bridge is a layer‑2 virtual device that connects multiple network interfaces, learns source MAC addresses, and forwards frames to the appropriate port. Unknown destinations are broadcast to all ports except the source.

Bridges maintain a MAC address table with a timeout (default 5 minutes) to handle topology changes.

Linux implements bridges as virtual net devices that can bind several Ethernet interfaces and may have an IP address.

Consider a bridge br0 that binds eth0 and eth1; the upper protocol layers see only br0. Frames received on eth0 or eth1 are handed to the bridge code, which decides whether to forward them out another port, drop them, or pass them up the local stack.

Common Bridge Commands

Docker creates and manages the bridge automatically, but you can also manipulate bridges manually. Create a bridge:

brctl addbr mybr0

Attach a physical interface to the bridge:

brctl addif mybr0 eth0

Physical interfaces attached to a bridge operate at layer 2 and do not need their own IP addresses:

ip addr flush dev eth0

Assign an IP address to the bridge itself:

ip addr add 192.168.1.1/24 dev mybr0

Veth Pair

After Docker creates the docker0 bridge, containers need a way to connect to it. Veth pairs provide a virtual cable between two Network Namespaces, appearing as two linked virtual interfaces.

Data sent on one end of a Veth pair appears on the peer, even if they reside in different namespaces.

Veth Pair Commands

Create a Veth pair:

ip link add veth0 type veth peer name veth1

Show the pair:

ip link show

Move one peer into another namespace (create it first with ip netns add ns1 if needed):

ip link set veth1 netns ns1

Inspect the peer inside the namespace:

ip netns exec ns1 ip link show

Assign IP addresses and bring the interfaces up:

ip netns exec ns1 ip addr add 10.1.1.1/24 dev veth1
ip addr add 10.1.1.2/24 dev veth0
ip netns exec ns1 ip link set dev veth1 up
ip link set dev veth0 up

Test connectivity:

ip netns exec ns1 ping 10.1.1.2
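Putting the pieces together, the following sketch mirrors what Docker does with docker0: two namespaces, each wired to a shared bridge through a veth pair, can reach each other at layer 2. All device and namespace names and the 172.18.0.0/24 subnet are illustrative, and root is required:

```shell
set -e

# A bridge playing the role of docker0.
ip link add demobr0 type bridge
ip link set demobr0 up

# Two network namespaces playing the role of containers.
ip netns add c1
ip netns add c2

# One veth pair per "container": the host end joins the bridge,
# the peer is moved into the namespace.
ip link add veth-c1 type veth peer name ceth-c1
ip link add veth-c2 type veth peer name ceth-c2
ip link set ceth-c1 netns c1
ip link set ceth-c2 netns c2
ip link set veth-c1 master demobr0
ip link set veth-c2 master demobr0
ip link set veth-c1 up
ip link set veth-c2 up

# Address the namespace ends and bring them up.
ip netns exec c1 ip addr add 172.18.0.2/24 dev ceth-c1
ip netns exec c2 ip addr add 172.18.0.3/24 dev ceth-c2
ip netns exec c1 ip link set ceth-c1 up
ip netns exec c2 ip link set ceth-c2 up

# The two "containers" now reach each other through the bridge.
ip netns exec c1 ping -c 1 172.18.0.3

# Clean up (deleting a namespace removes its veth peer as well).
ip netns del c1
ip netns del c2
ip link del demobr0
```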

Iptables/Netfilter

Linux provides hook points in the kernel network stack at which packets can be filtered, modified, or dropped. Netfilter is the kernel-mode framework that implements these hooks; iptables is the user-mode tool that manages netfilter’s rule tables.

Rule Tables

RAW

MANGLE

NAT

FILTER

RAW has the highest priority and FILTER the lowest. Not every table is attached to every hook point; for example, the PREROUTING and POSTROUTING hooks carry no FILTER rules, since filtering decisions are made at the INPUT, FORWARD, and OUTPUT hooks.
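As an illustration of how these tables are used in practice, the rules below sketch what Docker’s default bridge network installs in the NAT table: a MASQUERADE rule so container traffic leaves with the host’s address, and a DNAT rule for a published port. The 172.17.0.0/16 subnet matches Docker’s default; the container address 172.17.0.2 and the ports are illustrative, and Docker actually places its DNAT rules in a dedicated DOCKER chain that PREROUTING jumps to.

```shell
# Source NAT: traffic from containers that leaves through any interface
# other than docker0 is rewritten to the host's address
# (NAT table, POSTROUTING hook).
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# Destination NAT for a published port (e.g. docker run -p 8080:80):
# traffic arriving at host port 8080 is redirected to the container
# (NAT table, PREROUTING hook).
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
```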

Route

Linux maintains routing tables to decide where to forward IP packets. The kernel uses the table to determine the next hop based on destination IP.

If a packet’s destination matches a local address, it is delivered to the appropriate transport protocol; otherwise, it is forwarded according to the routing entries or dropped.

Route Table

Linux maintains at least two routing tables: LOCAL, which handles packets addressed to the host itself (including loopback), and MAIN, which handles ordinary IP forwarding. Inspect the LOCAL table:

ip route show table local type local

List all routes in the MAIN table:

ip route list
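The split between the two tables can be observed with ip route get, which reports how a given destination would be resolved (no root needed; 127.0.0.1 is simply the loopback address):

```shell
# A packet to a local address matches the LOCAL table:
# the output begins with "local ... dev lo", i.e. deliver to the local stack.
ip route get 127.0.0.1

# A packet to a remote address is resolved through the MAIN table's routes
# (this reports an error if the host has no matching route).
ip route get 1.1.1.1 || true
```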

Summary

We have introduced the core components required for Docker container networking: Network Namespace, bridge devices, Veth pairs, iptables/netfilter, and routing. Subsequent sections will build on this foundation to detail Docker’s full container network implementation.

Link: https://www.cnblogs.com/sally-zhou/p/13424208.html Author: Mr_Zack
Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Docker, iptables, container networking, Network Namespace, veth-pair, Linux Bridge
Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
