DPVS: High‑Performance User‑Space Load Balancer – Architecture, Features, and Deployment
DPVS is an open‑source, DPDK‑based user‑space load balancer. It achieves line‑rate throughput by binding each worker to a dedicated CPU core and NIC queue, uses lock‑free per‑CPU data structures, supports IPv4/IPv6 and multiple forwarding modes including NAT64, and is easier to maintain than kernel‑based alternatives in large‑scale production deployments.
DPVS (from "DPDK‑LVS") is an iQIYI‑developed, DPDK‑based, high‑performance layer‑4 load balancer. Compared with Linux Virtual Server (LVS), DPVS offers higher throughput, richer forwarding modes, and easier maintenance because it runs in user space and bypasses the kernel network stack.
Key advantages
Higher performance: a single worker thread can process up to 2.3 Mpps, and six workers can reach line rate on a 10 GbE NIC (≈12 Mpps with small packets), because DPVS avoids kernel locks, interrupts, context switches, and data copies.
More complete functionality: supports the Direct Routing (DR), NAT, Tunnel, Full‑NAT, and SNAT forwarding modes as well as IPv4/IPv6, and the latest version adds NAT64 for IPv6‑to‑IPv4 translation.
Better maintainability: as a user‑space program, DPVS shortens development cycles, simplifies debugging, and speeds up bug fixes.
Since its open‑source release in October 2017, DPVS has attracted wide attention, joining the DPDK ecosystem in 2018 and gathering over a thousand community members.
Overall architecture
DPVS follows a classic Master/Worker model. The Master handles control‑plane tasks (configuration, statistics), while Workers perform the core load‑balancing, scheduling, and packet forwarding. Each Worker is bound to a dedicated CPU core, and those cores are excluded from the OS scheduler to avoid context switches and cache invalidation.
DPVS also binds NIC queues to CPUs, allowing each Worker to process a specific receive and transmit queue, achieving linear scalability with the number of CPU cores and NIC queues.
Critical data structures (connection table, neighbor table, routing table) are per‑CPU and lock‑free, eliminating contention. For globally shared tables, DPVS uses cross‑CPU lock‑free synchronization via the DPDK rte_ring library.
Because DPVS implements a lightweight user‑space protocol stack, it provides only the necessary network components (ARP/NS/NA, routing, ping, checksum verification, IP address management) while bypassing the full kernel stack.
Functional modules
Network device layer – packet I/O, VLAN, bonding, tunnel, KNI, traffic control.
Lightweight protocol stack – IPv4/IPv6 three‑layer stack with neighbor, routing, address management.
IPVS forwarding layer – connection management, scheduling algorithms, five forwarding modes (including Full‑NAT with NAT64 support).
Basic modules – timers, CPU messaging, IPC, configuration handling.
Control plane & tools – ipvsadm, keepalived, dpip, and integration with Quagga.
Typical use cases
1. Traffic balancing – DPVS can provide IPv6 Full‑NAT load balancing for a service cluster, exposing a virtual IPv6 address (VIP) to the outside while distributing traffic to multiple backend servers.
./bin/dpip -6 addr add 2001:db8::1/64 dev eth1 # VIP
./bin/dpip -6 addr add 2001:db8:10::141/64 dev eth1 # local IP
./bin/ipvsadm -At [2001:db8::1]:80 -j enable # TCP
./bin/ipvsadm -Pt [2001:db8::1]:80 -z 2001:db8:10::141 -F eth0
./bin/ipvsadm -at [2001:db8::1]:80 -r [2001:db8:11::51]:80 -b
./bin/ipvsadm -at [2001:db8::1]:80 -r [2001:db8:11::52]:80 -b
./bin/ipvsadm -at [2001:db8::1]:80 -r [2001:db8:11::53]:80 -b
./bin/ipvsadm -Au [2001:db8::1]:80 # UDP
./bin/ipvsadm -Pu [2001:db8::1]:80 -z 2001:db8:10::141 -F eth0
./bin/ipvsadm -au [2001:db8::1]:80 -r [2001:db8:11::51]:6000 -b
./bin/ipvsadm -au [2001:db8::1]:80 -r [2001:db8:11::52]:6000 -b
./bin/ipvsadm -au [2001:db8::1]:80 -r [2001:db8:11::53]:6000 -b
2. NAT64 – DPVS’s Full‑NAT mode enables IPv6‑to‑IPv4 translation without changing the internal IPv4 network, allowing IPv6 clients to reach IPv4 services.
./bin/dpip -6 addr add 2001:db8::1/64 dev eth1 # VIP
./bin/dpip addr add 192.168.88.141/24 dev eth0 # local IP
./bin/dpip addr add 192.168.88.142/24 dev eth0 # local IP
./bin/ipvsadm -At [2001:db8::1]:80 -j enable # TCP
./bin/ipvsadm -Pt [2001:db8::1]:80 -z 192.168.88.141 -F eth0
./bin/ipvsadm -Pt [2001:db8::1]:80 -z 192.168.88.142 -F eth0
./bin/ipvsadm -at [2001:db8::1]:80 -r 192.168.12.51:80 -b
./bin/ipvsadm -at [2001:db8::1]:80 -r 192.168.12.53:80 -b
./bin/ipvsadm -at [2001:db8::1]:80 -r 192.168.12.54:80 -b
./bin/ipvsadm -Au [2001:db8::1]:80 # UDP
./bin/ipvsadm -Pu [2001:db8::1]:80 -z 192.168.88.141 -F eth0
./bin/ipvsadm -Pu [2001:db8::1]:80 -z 192.168.88.142 -F eth0
./bin/ipvsadm -au [2001:db8::1]:80 -r 192.168.12.51:6000 -b
./bin/ipvsadm -au [2001:db8::1]:80 -r 192.168.12.53:6000 -b
./bin/ipvsadm -au [2001:db8::1]:80 -r 192.168.12.54:6000 -b
3. SNAT – For internal users without direct Internet access, DPVS can provide a high‑performance SNAT gateway, translating private source addresses to a public IP before reaching external services.
./bin/dpip addr add 192.168.88.1/24 dev dpdk0 # VIP
./bin/dpip addr add 101.227.17.140/25 dev bond1 # WAN IP
./bin/dpip route add default via 101.227.17.254 dev bond1 # default GW
# TCP Rule
./bin/ipvsadm -A -H proto=tcp,src-range=192.168.88.1-192.168.88.253,oif=bond1 -s rr
./bin/ipvsadm -a -H proto=tcp,src-range=192.168.88.1-192.168.88.253,oif=bond1 -r 101.227.17.140:0 -J
# UDP Rule
./bin/ipvsadm -A -H proto=udp,src-range=192.168.88.1-192.168.88.253,oif=bond1 -s rr
./bin/ipvsadm -a -H proto=udp,src-range=192.168.88.1-192.168.88.253,oif=bond1 -r 101.227.17.140:0 -J
# ICMP Rule
./bin/ipvsadm -A -H proto=icmp,src-range=192.168.88.1-192.168.88.253,oif=bond1 -s rr
./bin/ipvsadm -a -H proto=icmp,src-range=192.168.88.1-192.168.88.253,oif=bond1 -r 101.227.17.140:0 -J
Performance and high availability
Since the end of 2018, DPVS has shipped the v1.7 series, adding IPv6‑to‑IPv6 and IPv6‑to‑IPv4 (NAT64) forwarding. Benchmarks show comparable performance on pure IPv4 and pure IPv6 paths, with a modest overhead for IPv6‑to‑IPv4 due to the layer‑3 header translation.
In iQIYI’s production environment, DPVS has run for over two years, handling thousands of forwarding rules, roughly 5 TB of daily traffic, and billions of concurrent connections. A typical deployment clusters multiple DPVS instances with equal‑cost multipath routing, each front‑ending several Nginx servers for L7 load balancing, providing high availability and easy horizontal scaling.
Open‑source collaboration
DPVS’s source code is hosted on GitHub with two long‑term branches: master (stable) and devel (development). LTS branches are created for non‑backward‑compatible updates (e.g., DPDK version upgrades). Contributions follow the Git workflow documented in the project’s Contributing guide.
iQIYI Technical Product Team