
Understanding Docker Container Networking: Modes, Overlay, and Beyond

This article explains Docker's container networking fundamentals, covering single‑host communication modes (host, bridge, none, container‑shared, custom), their advantages and drawbacks, and then delves into cross‑host solutions such as overlay, Weave, and Calico, comparing their architectures and performance implications.


Overview

Ever since Docker containers appeared, networking between them has been both a focus of attention and a pressing production need. Communication falls into two categories: between containers on a single host, and across hosts. This article analyzes the principles of both to help users make better use of Docker.

1. Docker single‑host container communication

By manipulating network namespaces, Docker creates an isolated network environment for each container. A container can run its own independent network stack, or share the namespace of the host or of another container. Docker offers five network modes:

bridge: the default mode, creates an independent network namespace with its own network stack.

host: uses the host's network namespace directly.

none: creates an isolated namespace but provides no network configuration, leaving only the loopback interface.

container: similar to host mode, but shares the network namespace with a specified container.

custom: introduced in Docker 1.9, allows third‑party drivers or user‑defined bridge networks for isolation.

Two terms are useful when comparing these modes: north‑south traffic means traffic between containers and the outside world via the host, while east‑west traffic means communication between containers on the same host.

1.1 host mode

The container shares the host's network namespace, so its IP is the host's IP; it can use any of the host's network interfaces, and its ports are the host's ports. This gives near‑native network performance, but at the cost of isolation:

Containers lose isolation and compete for the host's network stack.

Port resources are shared with the host and other containers.
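As a minimal sketch (the nginx image and port 80 are illustrative, not from the original), a host‑mode container binds directly to the host's interfaces, so no -p mapping is needed:

docker run -d --net=host --name web nginx
curl http://127.0.0.1:80    # answered by the container; no NAT in the path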

1.2 bridge mode

Bridge is Docker's default mode. Each container gets an independent network namespace and stack and attaches to the host's docker0 bridge through a veth pair; outbound traffic is masqueraded, and inbound services are exposed by binding host ports to container ports via DNAT.

Example command:

docker run -tid --name db -p 3306:3306 mysql

Inspecting the host's NAT table (iptables -t nat -nL) shows the corresponding DNAT entry:

DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3306 to:172.17.0.5:3306

In bridge mode the container's IP (here 172.17.0.5) is not reachable from outside the host; external clients must come in through the published host port. Host ports therefore become a contended resource, and NAT adds forwarding overhead.
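To see the bridge wiring on a default installation (interface and tool names assume a stock Linux setup with bridge-utils installed):

docker network inspect bridge    # subnet, gateway, and attached containers
brctl show docker0               # lists the veth* ports plugged into the bridge
ip addr show docker0             # typically 172.17.0.1/16 by default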

1.3 none mode

Containers have an isolated network stack but no network interfaces except lo. This minimal setup allows users to customize networking manually.
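A quick sketch, using busybox as an arbitrary small image:

docker run -tid --net=none --name isolated busybox
docker exec isolated ip addr    # only lo appears; all other setup is up to you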

1.4 container‑shared mode

Containers share another container's network namespace: the pair is isolated from the rest of the host, but its members see the same interfaces, IP, and ports. Kubernetes pods use exactly this pattern, with an "infra" (pause) container providing the network namespace that the pod's other containers join.
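As a sketch reusing the db container from the bridge example, a second container can join db's namespace, after which the pair shares one IP and can talk over localhost:

docker run -tid --name app --net=container:db busybox
docker exec app ip addr    # identical interfaces and IP to db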

1.5 user‑defined network mode

Developers can plug in third‑party network drivers (e.g., Calico, Weave, Open vSwitch), and Docker 1.9+ ships built‑in bridge and overlay drivers. Creating a custom bridge network:

docker network create bri1

Docker automatically inserts iptables rules that isolate user‑defined bridges from one another, such as:

-A DOCKER-ISOLATION -i br-8dba6df70456 -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o br-8dba6df70456 -j DROP
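Containers attached to bri1 can then reach each other by name (on Docker 1.10+, via the embedded DNS server), while the DROP rules above keep them cut off from the default docker0 bridge; the container names here are illustrative:

docker run -tid --net=bri1 --name c1 busybox
docker run -tid --net=bri1 --name c2 busybox
docker exec c1 ping -c 1 c2    # name resolution handled by Docker's DNS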

2. Docker cross‑host container communication

Early workarounds included host mode, port binding, pipework, and third‑party SDN tools, each with significant limitations. Docker 1.9 introduced a native overlay network based on VXLAN, and the libnetwork plugin mechanism allows further implementations. Cross‑host solutions fall roughly into three categories:

Tunnel‑based overlay networks (VXLAN, Geneve, STT).

User‑space packet‑encapsulation overlays (early versions of Weave).

Pure Layer‑3 SDN networks (Project Calico, with optional IPIP encapsulation).

Docker CNM model

CNM defines three concepts: the Sandbox (a container's network namespace and stack), the Endpoint (a virtual interface that joins a Sandbox to a Network), and the Network (a group of Endpoints that can communicate with one another directly).
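Roughly, the everyday CLI maps onto these concepts (a loose sketch, not a formal one‑to‑one correspondence):

docker network create mynet                      # creates a Network
docker run -tid --net=mynet --name c1 busybox    # creates a Sandbox plus an Endpoint in mynet
docker network connect bridge c1                 # adds a second Endpoint to the same Sandbox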

Docker native overlay model

Overlay networks require an external key‑value store (Consul, etcd, or ZooKeeper) to share state, and every host must have a unique hostname. Data traffic is VXLAN‑encapsulated and forwarded over UDP port 4789.
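As a sketch of this classic (pre‑Swarm‑mode) setup, each host's daemon is pointed at the shared store; consul-host below is an assumed Consul server, and on Docker 1.9–1.11 the binary was invoked as docker daemon rather than dockerd:

dockerd --cluster-store=consul://consul-host:8500 --cluster-advertise=eth0:2376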

Creating an overlay network:

docker network create -d overlay overlaynet

Network namespaces can be inspected via symbolic links and docker inspect commands.
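One way to poke at these hidden namespaces (paths assume a typical Linux install; c1 is an illustrative container name):

ls -l /var/run/docker/netns                              # namespaces Docker has created
docker inspect -f '{{.NetworkSettings.SandboxKey}}' c1   # this container's namespace path
ln -s /var/run/docker/netns /var/run/netns               # make them visible to ip netns
ip netns list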

Weave network model

Weave deploys a virtual router on each host, forming a peer‑to‑peer mesh. Hosts establish TCP control connections and UDP data connections, optionally encrypted.

Traffic flow: container → veth pair → host bridge → Weave router → UDP tunnel → remote host router → bridge → container.
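A sketch of the Weave workflow (host IPs illustrative): each host launches the router and peers with the others, then containers started through Weave's environment join the mesh automatically:

weave launch 10.0.0.2               # on host 10.0.0.1: start the router and peer with 10.0.0.2
eval $(weave env)                   # route subsequent docker commands through the weave proxy
docker run -tid --name a1 busybox   # comes up attached to the weave network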

Calico network model

Calico is a pure Layer‑3 SDN built on BGP and ordinary Linux routing, with no NAT and no tunnels (unless IPIP encapsulation is enabled for environments that block direct routing). Each host runs calico/node as a virtual router, and container IPs are exposed directly to the network.

Cross‑host communication proceeds in three steps: traffic leaves the container through its veth pair into the host's namespace; the host's routing table forwards it to the destination host as plain IP traffic; and the destination host routes it down the target container's veth pair. No NAT is involved at any point.
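On a Calico host this is visible in plain Linux routing (addresses and interface names illustrative): each local workload gets a /32 route to its veth device, each remote block a route via the owning host, and calicoctl shows the BGP sessions:

ip route show | grep cali    # e.g. 192.168.12.34 dev cali1a2b3c scope link
ip route show | grep bird    # e.g. 192.168.45.0/26 via 10.0.0.2 proto bird
calicoctl node status        # lists BGP peerings with the other hosts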

Original source: https://www.cnblogs.com/ilinuxer/p/6680205.html