
Understanding Linux Bridge and veth for Docker Networking

This article explains how Linux bridge and virtual Ethernet (veth) devices work together to enable communication between Docker containers, covering the creation of network namespaces, veth pairs, bridge setup, kernel implementation details, and packet forwarding processes.

Refining Core Development Skills

Linux veth devices are paired virtual network interfaces that allow Docker containers to communicate with the host or with each other. By connecting containers via veth pairs to a software bridge, multiple containers can be networked on a single host.

1. How to Use a Bridge

First, create two separate network namespaces (net1 and net2) and a veth pair for each, moving one end of each pair into its respective namespace and assigning IP addresses.

# ip netns add net1
# ip link add veth1 type veth peer name veth1_p
# ip link set veth1 netns net1
# ip netns exec net1 ip addr add 192.168.0.101/24 dev veth1
# ip netns exec net1 ip link set veth1 up
# ip netns exec net1 ip link list
# ip netns exec net1 ifconfig

Repeat the same steps for net2, using veth2, veth2_p, and IP 192.168.0.102:

# ip netns add net2
# ip link add veth2 type veth peer name veth2_p
# ip link set veth2 netns net2
# ip netns exec net2 ip addr add 192.168.0.102/24 dev veth2
# ip netns exec net2 ip link set veth2 up

2. Connecting the Two Networks

Create a bridge device and attach the remaining ends of the veth pairs to it, then assign an IP address to the bridge and bring all interfaces up.

# brctl addbr br0
# ip link set dev veth1_p master br0
# ip link set dev veth2_p master br0
# ip addr add 192.168.0.100/24 dev br0
# ip link set veth1_p up
# ip link set veth2_p up
# ip link set br0 up
# brctl show

3. Network Connectivity Test

Ping from net1 to net2 to verify communication.

# ip netns exec net1 ping 192.168.0.102 -I veth1

4. How the Bridge Is Created in the Kernel

The bridge is represented by two kernel objects: a struct net_device and a struct net_bridge. The function br_add_bridge allocates and registers both via alloc_netdev and register_netdev.

// file: net/bridge/br_if.c (simplified)
int br_add_bridge(struct net *net, const char *name)
{
    struct net_device *dev;
    int res;

    // allocate the net_device together with its net_bridge private area
    dev = alloc_netdev(sizeof(struct net_bridge), name, br_dev_setup);
    if (!dev)
        return -ENOMEM;

    dev_net_set(dev, net);
    dev->rtnl_link_ops = &br_link_ops;

    // register the bridge device with the network stack
    res = register_netdev(dev);
    if (res)
        free_netdev(dev);
    return res;
}

The alloc_netdev macro expands to alloc_netdev_mqs , which allocates memory for both net_device and net_bridge in one step.

5. Adding Devices to the Bridge

When a device is attached to the bridge (brctl addif br0 veth1_p, or ip link set dev veth1_p master br0 as above), the kernel creates a net_bridge_port object for the interface and registers a receive handler (br_handle_frame) that will intercept its incoming packets.

// file: net/bridge/br_if.c (simplified)
int br_add_if(struct net_bridge *br, struct net_device *dev)
{
    struct net_bridge_port *p;
    int err;

    // create a bridge port object for this device
    p = new_nbp(br, dev);

    // divert the device's receive path into br_handle_frame
    err = netdev_rx_handler_register(dev, br_handle_frame, p);

    // link the new port into the bridge's port list
    list_add_rcu(&p->list, &br->port_list);
    ...
}

6. Packet Forwarding Process

When a packet arrives on a veth attached to a bridge, the registered br_handle_frame handler intercepts it before the normal protocol stack, updates the forwarding database with the source MAC, and forwards the packet out the bridge port that owns the destination MAC (flooding all ports when the destination is unknown).

// file: net/bridge/br_input.c (simplified)
rx_handler_result_t br_handle_frame(struct sk_buff **pskb)
{
    struct sk_buff *skb = *pskb;
    ...
    // run the bridge netfilter PRE_ROUTING hook, then continue in
    // br_handle_frame_finish
    NF_HOOK(NFPROTO_BRIDGE, NF_BR_PRE_ROUTING, skb, skb->dev, NULL,
            br_handle_frame_finish);
    return RX_HANDLER_CONSUMED;
}

int br_handle_frame_finish(struct sk_buff *skb)
{
    const unsigned char *dest = eth_hdr(skb)->h_dest;
    struct net_bridge_port *p = br_port_get_rcu(skb->dev);
    struct net_bridge *br = p->br;
    struct net_bridge_fdb_entry *dst;
    ...
    // learn: record which port the source MAC arrived on
    br_fdb_update(br, p, eth_hdr(skb)->h_source, vid);

    // look up the port that owns the destination MAC
    dst = __br_fdb_get(br, dest, vid);
    if (dst) {
        // known unicast: forward out the destination port only
        br_forward(dst->dst, skb, skb2);
    }
    return 0;
}

The forwarding function swaps the skb->dev to the destination port and ultimately calls dev_queue_xmit to transmit the packet.

7. Summary

Linux bridge implements a software Ethernet switch at layer‑2, allowing multiple veth interfaces to be connected and to forward packets between Docker containers without involving the IP stack. This mechanism underlies Docker's default networking model and exemplifies network virtualization using pure software.

Tags: Docker, Kernel, Linux, Network Virtualization, Bridge, Veth
Written by Refining Core Development Skills

Fei has over 10 years of development experience at Tencent and Sogou. Through this account, he shares his deep insights on performance.
