
Unlocking Container Networking: Simple Linux Tools for Isolated Networks

This article demystifies single‑host container networking by explaining network namespaces, virtual Ethernet (veth) pairs, Linux bridges, IP routing, NAT masquerading, port publishing, and the differences between Docker and rootless container networking, all with practical command‑line examples.

MaGe Linux Operations

Using containers can feel like magic. To those who understand the underlying principles they are a powerful tool, but to newcomers they can seem like a black box. Having studied container technology for a long time, we found that containers are merely isolated Linux processes: they do not need images in order to run, and building an image is itself done by running containers.

What problems does container networking solve?

How to virtualize network resources so a container thinks it has an exclusive network?

How can containers coexist peacefully without interfering with each other while still communicating?

How can a container access the external world (e.g., the Internet) from inside?

How can the external world access a specific container on a host (port publishing)?

The result is simple: single‑host container networking is just a combination of known Linux features:

Network namespaces

Virtual Ethernet devices (veth)

Virtual network bridges

IP routing and Network Address Translation (NAT)

No code is required to make this network magic happen.

Prerequisites

Any Linux distribution works. All examples are executed on a Vagrant CentOS 8 VM.

$ vagrant init centos/8
$ vagrant up
$ vagrant ssh

[vagrant@localhost ~]$ uname -a
Linux localhost.localdomain 4.18.0-147.3.1.el8_1.x86_64

Rather than rely on a container solution such as Docker or Podman, we will build everything with the simplest standard tools, focusing on the basic concepts.

Network namespace isolation

A Linux network namespace is a separate copy of the network stack with its own routes, firewall rules, and devices. Creating a network namespace isolates only the network stack, not the whole container.

$ sudo ip netns add netns0
$ ip netns
netns0

To work inside the namespace, use nsenter:

$ sudo nsenter --net=/var/run/netns/netns0 bash
# new bash process runs inside netns0
$ sudo ./inspect-net-stack.sh

The output shows a completely different network stack with only a loopback device.
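The article does not show the contents of inspect-net-stack.sh; a minimal sketch of what such a script might contain (the exact contents are an assumption) simply dumps the parts of the stack that a namespace isolates:

```shell
#!/usr/bin/env bash
# inspect-net-stack.sh -- print the visible network stack (sketch; contents assumed)
echo '> ip link show'        # network devices
ip link show
echo '> ip addr show'        # assigned IP addresses
ip addr show
echo '> ip route show'       # routing table
ip route show
echo '> iptables -S'         # firewall rules (needs root)
sudo iptables -S
```

Running it on the host and then inside netns0 makes the isolation visible: inside the namespace only a (down) loopback device appears, the routing table is empty, and the firewall rules are the defaults.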

Connecting containers with veth pairs

Virtual Ethernet devices come in pairs and can connect two namespaces. Create a pair:

$ sudo ip link add veth0 type veth peer name ceth0

Move one end into the container namespace:

$ sudo ip link set ceth0 netns netns0

Assign IP addresses and bring the interfaces up:

$ sudo ip link set veth0 up
$ sudo ip addr add 172.18.0.11/16 dev veth0
$ sudo nsenter --net=/var/run/netns/netns0
$ ip link set lo up
$ ip link set ceth0 up
$ ip addr add 172.18.0.10/16 dev ceth0

Ping tests confirm connectivity between the two ends.
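For example, the following checks (commands assumed, using the addresses assigned above) exercise both directions of the pair:

```shell
# From the host, ping the container end of the pair
$ ping -c 2 172.18.0.10

# From inside netns0, ping the host end
$ sudo nsenter --net=/var/run/netns/netns0 ping -c 2 172.18.0.11
```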

Connecting multiple containers with a bridge

When multiple containers share the same IP subnet, routing conflicts arise. A Linux bridge works like a virtual switch at L2 and resolves this.

$ sudo ip link add br0 type bridge
$ sudo ip link set br0 up

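The next step attaches a device called veth1, which assumes a second container prepared exactly like the first. A sketch of that setup (the names netns1, veth1, ceth1 and the address 172.18.0.20 are assumptions mirroring the first pair; note that in the bridged setup the host-side veth ends carry no IP address, the bridge does):

```shell
$ sudo ip netns add netns1
$ sudo ip link add veth1 type veth peer name ceth1
$ sudo ip link set ceth1 netns netns1
$ sudo ip link set veth1 up

$ sudo nsenter --net=/var/run/netns/netns1
$ ip link set lo up
$ ip link set ceth1 up
$ ip addr add 172.18.0.20/16 dev ceth1
```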
Attach the host‑side veth devices to the bridge (veth1 is the host end of a second container's pair, prepared the same way as veth0):

$ sudo ip link set veth0 master br0
$ sudo ip link set veth1 master br0

Assign an IP to the bridge interface:

$ sudo ip addr add 172.18.0.1/16 dev br0

Now containers can ping each other and the bridge IP, and the host can reach them.
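The resulting connectivity can be checked end to end (assuming a second container at, say, 172.18.0.20):

```shell
# Container-to-container through the bridge
$ sudo nsenter --net=/var/run/netns/netns0 ping -c 2 172.18.0.20

# Container to the bridge IP
$ sudo nsenter --net=/var/run/netns/netns0 ping -c 2 172.18.0.1

# Host to container (a route via br0 was created when br0 got its address)
$ ping -c 2 172.18.0.10
```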

Enabling external connectivity (routing & NAT)

Enable IP forwarding on the host:

$ sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

Add a MASQUERADE rule so container traffic appears to come from the host:

$ sudo iptables -t nat -A POSTROUTING -s 172.18.0.0/16 ! -o br0 -j MASQUERADE

After this, containers can reach the Internet (e.g., ping 8.8.8.8).
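To confirm this works, note that the container also needs a default route via the bridge so that off-subnet traffic leaves the namespace at all (this step is implied by the walkthrough above):

```shell
# Give the container a default route via the bridge IP
$ sudo nsenter --net=/var/run/netns/netns0 ip route add default via 172.18.0.1

# Inspect the NAT rule just added
$ sudo iptables -t nat -S POSTROUTING

# From inside the container, reach an external address
$ sudo nsenter --net=/var/run/netns/netns0 ping -c 2 8.8.8.8
```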

Port publishing

Expose a container service on the host’s external IP:

# Inside container netns0, run a server
$ python3 -m http.server --bind 172.18.0.10 5000

# DNAT external traffic to the container
$ sudo iptables -t nat -A PREROUTING -d 10.0.2.15 -p tcp --dport 5000 -j DNAT --to-destination 172.18.0.10:5000
$ sudo iptables -t nat -A OUTPUT -d 10.0.2.15 -p tcp --dport 5000 -j DNAT --to-destination 172.18.0.10:5000

Now curl 10.0.2.15:5000 returns the container’s HTTP response.

Understanding Docker network drivers

Docker’s --network host mode shares the host’s network namespace, while --network none provides only a loopback interface. The default --network bridge mode corresponds to the bridge setup described above.
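These modes can be observed directly with standard Docker flags (the alpine image is assumed to be available):

```shell
# host mode: the container sees the host's own interfaces
$ docker run --rm --network host alpine ip addr show

# none mode: only a loopback interface
$ docker run --rm --network none alpine ip addr show

# default bridge mode: an eth0 that is one end of a veth pair attached to docker0
$ docker run --rm alpine ip addr show
```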

Rootless containers

Rootless containers (e.g., Podman run by an unprivileged user) can create network namespaces, but cannot create veth pairs on the host, since that requires root. Instead, they rely on slirp4netns for user‑space networking, which brings limitations: for example, ping may not work out of the box because raw sockets are unavailable.
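A rootless container can still publish services through slirp4netns, as long as the host-side port is unprivileged (the nginx image and port numbers here are illustrative):

```shell
# As an unprivileged user; slirp4netns provides the user-space network path
$ podman run --rm -p 8080:80 nginx

# ICMP ping from inside may fail under slirp4netns unless the host's
# net.ipv4.ping_group_range sysctl permits unprivileged ICMP sockets
$ podman run --rm alpine ping -c 1 8.8.8.8
```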

Conclusion

The presented approach is one of the most common ways to organize container networking, relying heavily on Linux virtual networking primitives. Other solutions exist via official or third‑party plugins, but all depend on the same underlying Linux technologies, making container networking essentially a form of Linux network virtualization.

References:

https://docs.docker.com/network/#network-drivers

https://www.redhat.com/sysadmin/container-networking-podman

https://github.com/rootless-containers/slirp4netns

https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking/

Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
