Master Docker Container Networking: From Bridge to Overlay and Beyond
This article walks through Docker's container networking concepts, including the CNM model, the native drivers (bridge, host, macvlan, none, overlay), port mapping, and overlay SDN, as well as container storage options and Docker Compose orchestration techniques for building robust cloud-native applications.
Container Networking
CNM Container Networking Model
CNM (Container Networking Model) is Docker's open‑source abstraction layer that enables application portability across diverse network infrastructures. It defines three core concepts: Network Sandbox, Network, and Endpoint.
Network Sandbox: a container's network namespace, with isolated resources such as interfaces, routes, and DNS configuration.
Network: an L2 network or L3 subnet provided by a network provider (e.g., a Linux bridge or an SDN).
Endpoint: the attachment point between a Network and a Sandbox (e.g., one end of a veth pair).
The implementation lives in the libnetwork library, exposing drivers of four types: Native Network Driver, Native IPAM Driver, Remote Network Driver, Remote IPAM Driver.
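These concepts can be seen on a live host with the inspect commands: a Network's subnet comes from its IPAM configuration, each attached container appears as an Endpoint, and a container's Sandbox is its network namespace. A sketch (the container name is a placeholder):

```shell
# Show the Network's IPAM configuration (subnet, gateway)
docker network inspect bridge --format '{{json .IPAM.Config}}'

# The "Containers" section of the full output lists each attached
# Endpoint with its MAC address and IP within the Network
docker network inspect bridge

# A container's Sandbox is its network namespace; SandboxKey is
# the path to that namespace on the host
docker inspect --format '{{.NetworkSettings.SandboxKey}}' <container>
```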
Native Network Driver
Native network drivers are implemented in Docker Engine, which ships several built-in drivers.
List all networks with the Docker CLI:
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
c79756cf9cde   bridge    bridge    local
204025a5abbc   host      host      local
9b9024f5ac40   macvlan   macvlan   local
6478888548d8   none      null      local
p2e02u1zhn8x   overlay   overlay   swarm

The SCOPE column indicates whether a network is host-local (local) or swarm-wide (swarm).
Linux Bridge Mode
Docker creates a default bridge (docker0) on the host. When a container uses this network Docker creates a network namespace, allocates an IP from the bridge subnet, and connects the container via a veth pair.
$ ip a
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:46:c3:00:eb brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0

Custom bridges can be created with

docker network create -d bridge --subnet 10.0.0.0/24 my_bridge

and used by containers.
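Assuming the my_bridge network above exists, containers can join it and receive addresses from 10.0.0.0/24. The container and image names here are illustrative:

```shell
# Attach an nginx container to the user-defined bridge
docker run -d --name web --network my_bridge nginx:latest

# User-defined bridges provide embedded DNS, so other containers
# on my_bridge can reach this container by name
docker run --rm --network my_bridge busybox ping -c 1 web
```

Name resolution between containers is one practical reason to prefer a user-defined bridge over the default docker0 bridge.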
Host Mode
Containers share the host’s network namespace, gaining direct access to host interfaces but losing isolation.
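As a sketch, running nginx in host mode binds it directly to the host's port 80; no -p mapping is needed (or honored), since there is no separate container network namespace:

```shell
# nginx listens on the host's own interfaces
docker run -d --network host nginx:latest
```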
MACVLAN Mode
Creates virtual NICs that appear as separate MAC/IP devices on the host LAN, allowing containers to use the same subnet as external devices.
Example creation:
# Create MACVLAN network bound to eth0
docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 mvnet

None Mode
Provides an isolated namespace with only a loopback interface; the container has no network connectivity.
$ docker run -d --network none --name box3 busybox

Port Mapping
Expose container services to the outside world using -p (map a specific host port to a container port) or -P (publish all exposed ports to ephemeral host ports); Docker configures the corresponding NAT rules on the host.
# Custom mapping
docker run -d -p 8888:80 nginx:latest

Overlay SDN
Docker Swarm can create overlay networks for multi‑host clusters, using VxLAN under the hood.
$ docker swarm init
$ docker network create -d overlay --attachable ovnet

Container Storage
Host OS Directory
Mount a host directory into a container with -v /data:/usr/share/nginx/html.
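A complete example, using an illustrative host path and image:

```shell
# Serve files from the host directory /data through nginx;
# changes on the host are immediately visible in the container
docker run -d -p 8080:80 -v /data:/usr/share/nginx/html nginx:latest
```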
Docker Volume
Abstracts storage back‑ends; manage with docker volume commands (list, create, inspect, remove).
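A typical volume lifecycle might look like this (the volume name app_data is illustrative):

```shell
docker volume create app_data     # create a named volume
docker volume ls                  # list volumes
docker volume inspect app_data    # show driver and host mountpoint

# Mount the volume into a container
docker run -d --name worker -v app_data:/var/lib/data busybox sleep 3600

docker volume rm app_data         # fails while the volume is in use
```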
Data Container
A container used solely as a data volume source for other containers.
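A minimal sketch of the pattern, with illustrative names: create a stopped container that owns a volume, then mount its volumes into other containers with --volumes-from.

```shell
# Data-only container; it never needs to run
docker create -v /shared --name datastore busybox

# Any container can reuse datastore's volumes
docker run --rm --volumes-from datastore busybox ls /shared
```

On modern Docker, named volumes are generally preferred over data containers, but the pattern still works.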
Container Orchestration
Docker Compose defines multi‑container applications in a YAML file.
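A minimal Compose file might look like this (service names, images, and ports are illustrative):

```yaml
version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```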
Network Mode
Specify the container network mode (e.g., bridge, host, none) via network_mode in the Compose file; custom networks and their drivers are defined under the top-level networks key.
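For example (image name illustrative):

```yaml
services:
  web:
    image: nginx:latest
    network_mode: bridge   # or "host", "none", "service:<name>"
```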
Port Mapping
Define ports under ports; not effective with network_mode: host.
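Both the short "host:container" string form and the long form are supported; the long form makes each field explicit:

```yaml
services:
  web:
    image: nginx:latest
    ports:
      - target: 80        # container port
        published: 8888   # host port
        protocol: tcp
```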
Service Startup Order
Use depends_on to declare dependencies, combined with scripts like wait-for-it.sh for readiness checks, since depends_on only orders container startup and does not wait for a service to be ready.
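A sketch of the pattern, assuming wait-for-it.sh is bundled in the (illustrative) application image:

```yaml
services:
  db:
    image: postgres:13
  web:
    image: myapp:latest
    depends_on:
      - db
    # depends_on only orders startup; block until the DB port
    # accepts connections before launching the app
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
```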
Container Environment Variables
Inject variables with the environment field or via a .env file.
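Both styles side by side (variable names are illustrative; API_KEY is read from a .env file next to the Compose file, or from the shell):

```yaml
services:
  web:
    image: nginx:latest
    environment:
      - APP_ENV=production
      - API_KEY=${API_KEY}
```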
Rebuilding a Specific Container
Stop, remove, and restart a single service with docker-compose commands.
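For example, to rebuild only a service named web without touching its dependencies (the service name is illustrative):

```shell
docker-compose stop web
docker-compose rm -f web
docker-compose build web
docker-compose up -d --no-deps web
```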