
How Docker Manages Container Networking: veth Pairs, Bridge, and Custom Networks

This article explains Docker's networking fundamentals, covering Linux veth pairs, the default docker0 bridge, container linking with --link, the creation and use of custom bridge networks, and how to connect containers across isolated networks for reliable service communication.


1. Linux veth pair

A veth pair is a pair of connected virtual network devices: a packet sent into one end comes out of the other, forming a bidirectional link. Docker creates one veth pair per container, placing one end inside the container's network namespace and attaching the other to the host side.
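The same mechanism can be reproduced by hand with iproute2. A minimal sketch (requires root; the interface, namespace, and addresses below are example names, not anything Docker creates):

```shell
# Create a veth pair: packets entering one end exit the other
ip link add veth-host type veth peer name veth-ctr

# Move one end into a network namespace (a stand-in for a container)
ip netns add demo-ns
ip link set veth-ctr netns demo-ns

# Assign addresses and bring both ends up
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo-ns ip addr add 10.0.0.2/24 dev veth-ctr
ip netns exec demo-ns ip link set veth-ctr up

# The namespace is now reachable through the pair
ping -c 1 10.0.0.2
```

Docker does this for you on every `docker run`, attaching the host-side end to a bridge instead of addressing it directly.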

2. Understanding docker0

On a typical host you have three network interfaces:

<code>lo      127.0.0.1      # loopback
eth0    172.31.179.120   # host IP (e.g., Alibaba Cloud)
docker0 172.17.0.1       # Docker bridge</code>

docker0 is created when Docker is installed and acts as a bridge between containers and the host. Each container gets an IP in the 172.17.0.0/16 subnet and is linked to docker0 via its own veth pair.
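You can observe this wiring directly on a Docker host; the commands below only read state (interface names like vethXXXX vary per container):

```shell
# One host-side vethXXXX interface appears per running container
ip link show type veth

# The bridge itself, typically 172.17.0.1/16
ip addr show docker0

# Which containers are attached to the default bridge, and their IPs
docker network inspect bridge
```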

3. Container linking – --link

When a container is started, Docker creates a new veth pair and attaches one end to docker0. Using --link you can expose another container's IP and hostname inside the new container, but the link is one-way.

<code># Start first container
docker run -d -p 8081:8080 --name tomcat01 tomcat
# Start second container
docker run -d -p 8082:8080 --name tomcat02 tomcat
# Start third container with link to tomcat02
docker run -d -p 8083:8080 --name tomcat03 --link tomcat02 tomcat</code>

Inside tomcat03, the /etc/hosts file now contains an entry mapping the hostname tomcat02 to that container's IP, allowing name-based communication from tomcat03 to tomcat02 — but not in the reverse direction.
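For illustration, the relevant lines of /etc/hosts inside tomcat03 might look like this (the IPs are examples from the default 172.17.0.0/16 subnet, and the container IDs are placeholders):

<code>172.17.0.3      tomcat02 <container-id-of-tomcat02>
172.17.0.5      <container-id-of-tomcat03></code>

Because only tomcat03's hosts file was modified, tomcat02 has no entry for tomcat03 — which is why the link is one-way.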

4. Custom networks (recommended)

The default bridge network (docker0) has limitations: containers on it cannot resolve each other by name, and --link is deprecated. Docker provides built-in drivers (bridge, host, none) and allows you to create user-defined bridge networks, which do provide automatic name resolution.

<code># Create a custom bridge network named mynet
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
# Run containers on the custom network
docker run -d -p 8081:8080 --name tomcat-net-01 --net mynet tomcat
docker run -d -p 8082:8080 --name tomcat-net-02 --net mynet tomcat</code>

Containers on the same custom network can resolve each other by name without using --link.
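A quick way to verify the built-in DNS, assuming the two containers above are running (note: the official tomcat image may not ship ping, in which case install it or use another tool inside the container):

```shell
# Name resolution works in both directions on a user-defined network
docker exec tomcat-net-01 ping -c 1 tomcat-net-02
docker exec tomcat-net-02 ping -c 1 tomcat-net-01
```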

5. Network connectivity across networks

Containers attached to different bridge networks cannot communicate directly. The docker network connect command attaches a container to an additional network, enabling cross-network communication.

<code># Connect an existing container to the custom network
docker network connect mynet tomcat01</code>

After the connection, tomcat01 can ping containers in mynet by IP or by container name.
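Under the hood, connecting a container to a second network gives it a second interface. A quick check, assuming the containers from the earlier examples are running (and, as before, that ping is available in the image):

```shell
# tomcat01 now has one interface on the default bridge and one on mynet
docker exec tomcat01 ip addr

# Cross-network communication by name
docker exec tomcat01 ping -c 1 tomcat-net-01
```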

6. Summary

veth pairs provide a paired virtual interface for each container.

Docker’s default bridge network is docker0.

docker0 functions as a Linux bridge — a virtual Layer 2 switch — linking containers to the host, with NAT handling outbound traffic.

Use custom bridge networks for stable DNS‑based service discovery and to avoid IP changes.

Network connectivity can be achieved by attaching containers to multiple networks.

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
