
Understanding Docker Networking: veth Pairs, Bridge, and Custom Networks

This article explains how Docker isolates containers using Linux veth pairs, the default docker0 bridge, the deprecated --link method, and how to create and use custom bridge networks for reliable service‑name communication and cross‑network connectivity.


Docker Network Principles

Each container runs in its own network namespace, an isolated environment much like a small Linux system; Docker uses Linux veth pairs to connect each container's namespace to the host network.

1. Linux veth pair

A veth pair consists of two virtual network interfaces that are linked together, one attached to the container’s network stack and the other to the host.

A veth pair links two interfaces, e.g. veth0 and veth1.

2. Understanding docker0

On the host you can see three interfaces:

<code>lo      127.0.0.1      # loopback
eth0    172.31.179.120 # host IP
docker0 172.17.0.1     # Docker bridge</code>
docker0 is created when Docker is installed and bridges containers to the host.

Example: start a Tomcat container.

<code>[root@host]# docker pull tomcat
[root@host]# docker run -d -p 8081:8080 --name tomcat01 tomcat</code>

After starting, a new veth pair appears (e.g., vethad33778@if200) and the container receives IP 172.17.0.2.

Each new container adds a veth pair linked to docker0.
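The @if200 suffix in a host-side name such as vethad33778@if200 records the ifindex of the peer interface inside the container; a quick shell sketch (using the sample name from above) extracts it:

```shell
# The host-side veth name encodes the peer's ifindex after "@if".
# 'vethad33778@if200' is the sample name seen above.
name='vethad33778@if200'
peer_index="${name##*@if}"    # strip everything up to and including "@if"
echo "peer ifindex: ${peer_index}"
```

Matching that index against `ip link` output inside the container is how you pair a host veth with its container.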

3. Container linking – --link

Containers on the default bridge can reach each other by IP, but there is no name resolution between them; --link works around this by adding the target container's name to the source container's /etc/hosts.

<code>[root@host]# docker run -d -p 8083:8080 --name tomcat03 --link tomcat02 tomcat
[...]
[root@host]# docker exec -it tomcat03 cat /etc/hosts
...
172.17.0.3 tomcat02 e4060ea4ee28   # linked container name
172.17.0.4 db75c42f7f7f</code>
--link is one‑way; the source sees the target, but not vice‑versa.
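Since --link is nothing more than a hosts-file entry, resolving the linked name inside the container is a plain file lookup; a sketch using a sample /etc/hosts (contents copied from the output above):

```shell
# Sample /etc/hosts as written by --link (copied from the output above)
cat > /tmp/hosts_sample <<'EOF'
127.0.0.1  localhost
172.17.0.3 tomcat02 e4060ea4ee28
172.17.0.4 db75c42f7f7f
EOF

# Resolving "tomcat02" is just a first-match scan of this file
awk '$2 == "tomcat02" { print $1 }' /tmp/hosts_sample
```

This also shows why the link is one-way: only the source container's hosts file is modified.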

4. Custom networks (recommended)

docker0 is the default bridge network.

It does not support DNS name resolution.

--link works but is deprecated.

Docker provides three built‑in network drivers: bridge, host, none.

<code># Create a user‑defined bridge network
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet</code>
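As a sanity check on the --subnet/--gateway pair: the gateway address must fall inside the subnet. A small pure-shell sketch of that check, using the addresses from the command above:

```shell
# Convert a dotted-quad address to a 32-bit integer
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

net=$(ip_to_int 192.168.0.0)   # subnet base from the command above
gw=$(ip_to_int 192.168.0.1)    # gateway from the command above
mask=$(( (0xffffffff << (32 - 16)) & 0xffffffff ))  # /16 netmask

# The gateway belongs to the subnet when its network part matches
[ $(( gw & mask )) -eq "$net" ] && echo "gateway is inside 192.168.0.0/16"
```

Docker performs the same validation and rejects a `--gateway` outside the given `--subnet`.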

Run containers on the custom network:

<code>docker run -d -p 8081:8080 --name tomcat-net-01 --net mynet tomcat
docker run -d -p 8082:8080 --name tomcat-net-02 --net mynet tomcat</code>

Containers can ping each other by name:

<code>docker exec -it tomcat-net-01 ping tomcat-net-02</code>
Service‑name resolution works without --link.
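Under the hood, user-defined networks get Docker's embedded DNS server, which containers reach at 127.0.0.11 via /etc/resolv.conf. A sketch with sample file contents (illustrative, not captured from a live container):

```shell
# Sample /etc/resolv.conf as seen inside a container attached to a
# user-defined network (Docker's embedded DNS listens on 127.0.0.11)
cat > /tmp/resolv_sample <<'EOF'
nameserver 127.0.0.11
options ndots:0
EOF

grep -q '^nameserver 127\.0\.0\.11$' /tmp/resolv_sample \
  && echo "embedded DNS is configured"
```

On the default docker0 bridge this embedded DNS does not resolve container names, which is exactly why the custom network is the recommended approach.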

5. Network connectivity across networks

To connect an existing container to another network, use docker network connect:

<code>docker network connect mynet tomcat01</code>

This attaches the container to the specified network, enabling cross‑network communication.

6. Summary

veth pair provides a paired virtual network interface.

Docker uses the default bridge network docker0.

docker0 acts as a virtual Ethernet bridge linking containers to the host, with NAT handling outbound traffic.

Custom bridge networks allow service‑name resolution and avoid IP changes.

docker network connect bridges containers across otherwise isolated networks.

Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and accompany you throughout your operations career.
