
Overview of Docker Network Extensions: Libnetwork, Pipework, Socketplane, Weave, Flannel, and Tinc

This article reviews six Docker networking projects—Libnetwork, Pipework, Socketplane, Weave, Flannel, and Tinc—explaining their architectures, key concepts, and how they extend Docker's native networking to meet security and advanced functionality requirements.


In the previous article, "Docker Native Networking and Implementation Principles," we discussed Docker's native network model, which enables container-to-container and host-to-container communication while preserving port mapping and linking. An earlier piece, "How OpenStack Nova Integrates with Hypervisor," also touched on Nova's Docker integration.

However, Docker's native networking can be limited for security‑sensitive or specialized scenarios, prompting many projects to extend Docker's networking capabilities. This article focuses on six such projects and their solutions.

Libnetwork Overview

Libnetwork is a new networking stack being developed by Docker, merging libcontainer and Docker Engine networking code. It introduces the Container Network Model (CNM) and provides a consistent programming API and network abstraction, backed by partners such as Cisco, IBM, Joyent, Microsoft, Rancher, VMware, and Weave.

Libnetwork's model includes three core concepts: Network Sandbox, Endpoint, and Network.

The Network Sandbox provides an isolated environment for a container's network configuration. An Endpoint is the interface used for communication on a specific network; each Endpoint belongs to a single Network Sandbox, though multiple Endpoints can coexist within the same sandbox. A Network is a uniquely identifiable group of Endpoints that can communicate with each other, while endpoints in different Networks remain isolated. This allows creation of completely separate Frontend and Backend networks.
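
As a rough illustration, once libnetwork landed in the Docker Engine (the docker network subcommands introduced around Docker 1.9), the CNM concepts map onto the CLI approximately as follows; the network and container names here are purely illustrative:

    # Two isolated CNM Networks
    docker network create --driver bridge frontend
    docker network create --driver bridge backend

    # Starting a container creates its Network Sandbox and attaches an
    # Endpoint on the "frontend" network
    docker run -d --name app --net frontend nginx

    # A second Endpoint in the same Sandbox, this time on "backend"
    docker network connect backend app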

Pipework Overview

Pipework is a shell script written by one of Docker's developers to simplify Docker network configuration. It automates several advanced networking tasks, though its overall feature set is limited.

Pipework first checks whether the specified bridge (br0, for example) exists; if not, it creates one, choosing an Open vSwitch bridge when the name starts with "ovs" and a Linux bridge when it starts with "br". It then creates a veth pair to provide a NIC for the container and attaches one end to the bridge.
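
For example (container names, bridge names, and addresses are illustrative), a typical invocation looks like this; the prefix of the bridge name decides which kind of bridge Pipework creates:

    # Linux bridge: the name starts with "br", so Pipework creates a
    # Linux bridge if it does not already exist
    pipework br0 web 192.168.1.10/24

    # Open vSwitch bridge: a name starting with "ovs" makes Pipework
    # create an OVS bridge instead
    pipework ovsbr0 db 192.168.1.11/24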

Using docker inspect, Pipework obtains the container's PID and symlinks the container's network namespace under /var/run/netns/, which makes it manageable with the host's ip netns command. One end of the veth pair is moved into the container's namespace (appearing as eth1) and the other is attached to the bridge. Finally, Pipework assigns the IP address to the new eth1; if a gateway is specified, it replaces the default route that previously pointed through eth0 and docker0, so outbound traffic exits via eth1.
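
Under the hood, the sequence Pipework automates looks roughly like the following sketch (the container name, bridge, addresses, and veth names are illustrative, and the gateway step only runs when one is requested):

    # Locate the container's network namespace via its PID
    pid=$(docker inspect -f '{{.State.Pid}}' web)
    mkdir -p /var/run/netns
    ln -sf /proc/$pid/ns/net /var/run/netns/$pid

    # Create a veth pair; one end joins the bridge, the other goes to the container
    ip link add vethhost0 type veth peer name vethguest0
    ip link set vethhost0 master br0
    ip link set vethhost0 up
    ip link set vethguest0 netns $pid

    # Inside the container: rename to eth1, assign the address, bring it up
    ip netns exec $pid ip link set vethguest0 name eth1
    ip netns exec $pid ip addr add 192.168.1.10/24 dev eth1
    ip netns exec $pid ip link set eth1 up

    # With a gateway specified, replace the default route so traffic
    # leaves via eth1 rather than eth0/docker0
    ip netns exec $pid ip route replace default via 192.168.1.254 dev eth1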

Socketplane Overview

Socketplane, originally an independent startup that Docker later acquired, wraps the docker command line, intercepting and modifying client requests to provide network security and management features.

Socketplane relies on Open vSwitch and Consul: Open vSwitch acts as the virtual switch handling low-level traffic, while Consul handles message synchronization and service discovery between hosts. By abstracting VLANs, VXLANs, tunnels, and tunnel endpoints (TEPs), Socketplane integrates with OVS, supports multiple networks, and provides distributed IP address management.
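
The sketch below is not Socketplane's own CLI; it shows the kind of Open vSwitch plumbing its agent drives on each host, creating an OVS bridge and a VXLAN tunnel port toward a peer. The bridge name and peer address are illustrative, and in practice the agent maintains such ports automatically using membership learned through Consul:

    # OVS bridge carrying container traffic on this host
    ovs-vsctl add-br sp-br0

    # VXLAN tunnel port toward a peer host
    ovs-vsctl add-port sp-br0 vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=10.0.0.2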

Weave Overview

Weave consists of a user‑space shell script and a Weave virtual router container deployed on each host. The virtual routers interconnect, forming a flat network across hosts.

Weave intercepts IP packets from ordinary containers, encapsulates them in UDP, and forwards them to the corresponding containers on other hosts, thereby enabling cross‑host container communication.
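
With the classic Weave CLI (before the later proxy and plugin modes), the workflow looked roughly like this; the host address and container subnet are illustrative:

    # On host A: start the Weave router container
    weave launch

    # On host B: start the router and peer it with host A
    weave launch 10.0.0.1

    # Run a container with an additional Weave interface on 10.2.1.0/24
    weave run 10.2.1.1/24 -ti ubuntu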

Flannel Overview

Flannel, originally named Rudder and developed by the CoreOS team, was designed to give every host its own subnet within a single shared container network. It was built primarily for Google's Kubernetes, but it is also useful for simplifying port mapping and network management in other contexts.

Flannel's design resembles the Open vSwitch overlay approach in that it replaces Docker's default bridge configuration with its own, but unlike OVS it encapsulates IP packets in UDP in software. Network configuration is stored in an etcd cluster (typically three or more nodes), and a flanneld daemon runs on every host. When a container starts, Docker allocates it an IP address from that host's flannel-assigned subnet.
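
A minimal setup, assuming an existing etcd cluster and using the conventional key path with illustrative address ranges, looks roughly like this (older Docker releases start the daemon as docker -d rather than dockerd):

    # Store the shared network configuration in etcd
    etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'

    # flanneld, running on every host, claims a per-host subnet and
    # writes it to a file Docker can consume
    flanneld &
    source /run/flannel/subnet.env

    # Point the Docker bridge at the flannel-assigned subnet
    dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}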

Tinc Overview

Tinc is a lightweight, open‑source VPN solution that creates encrypted tunnels, providing a transparent private network for Docker containers.
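
A minimal sketch of a Tinc mesh for container hosts might look like the following, with the network name, node names, and files purely illustrative; each node also needs a host file under hosts/ and a tinc-up script to assign the VPN interface its address:

    # /etc/tinc/dockernet/tinc.conf on node A
    Name = nodeA
    Mode = switch        # act as a virtual Ethernet switch
    ConnectTo = nodeB

    # Generate this node's key pair, then start the daemon
    tincd -n dockernet -K
    tincd -n dockernet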

Although Docker’s native network model already supports internal and external services via virtual interfaces, subnets, NAT tables, and iptables, additional projects are needed to deliver advanced configuration options and stronger security controls.

Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
