dperf: A DPDK‑Based 100Gbps Network Performance and Load Testing Tool
dperf is an open-source network performance and load testing tool built on DPDK. On a single x86 server it can generate more than ten million new HTTP connections per second, sustain billions of concurrent connections, and drive hundreds of Gbps of throughput, while reporting detailed per-second statistics.
Advantages
- Powerful performance: over ten million HTTP connections per second (CPS), hundreds of Gbps of bandwidth, billions of concurrent connections.
- Detailed statistics: per-second packet loss, PPS, TCP/Socket/HTTP error counts, retransmissions broken down by TCP flag, and more.
- Rich use cases: stress-testing layer-4 load balancers, cloud VM network testing, benchmarking NIC and CPU packet-processing performance, and serving as a high-performance HTTP server or client.
Performance
HTTP Connections per Second (CPS)
| Client Cores | Server Cores | HTTP CPS   |
|--------------|--------------|------------|
| 1            | 1            | 2,101,044  |
| 2            | 2            | 4,000,423  |
| 4            | 4            | 7,010,743  |
| 6            | 6            | 10,027,172 |
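The CPS figures above scale close to linearly with core count. As a quick arithmetic check (numbers taken directly from the table), the per-core rate declines only gradually as cores are added:

```python
# Per-core connection rate computed from the CPS table above.
cps_by_cores = {1: 2_101_044, 2: 4_000_423, 4: 7_010_743, 6: 10_027_172}

for cores, total_cps in cps_by_cores.items():
    per_core = total_cps / cores
    print(f"{cores} core(s): {per_core:,.0f} CPS per core")
```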
HTTP Throughput
| Client Cores | Server Cores | RX (Gbps) | TX (Gbps) | Client CPU % | Server CPU % |
|--------------|--------------|-----------|-----------|--------------|--------------|
| 1            | 1            | 18        | 18        | 60           | 59           |
| 2            | 2            | 35        | 35        | 60           | 59           |
| 4            | 4            | 46        | 46        | 43           | 43           |
HTTP Concurrent Connections
| Client Cores | Server Cores | Concurrent Connections | Client CPU % | Server CPU % |
|--------------|--------------|------------------------|--------------|--------------|
| 1            | 1            | 100,000,000            | 34           | 39           |
| 2            | 2            | 200,000,000            | 36           | 39           |
| 4            | 4            | 400,000,000            | 40           | 41           |
UDP TX PPS
| Client Cores | TX MPPS | Client CPU % |
|--------------|---------|--------------|
| 1            | 15.96   | 95           |
| 2            | 29.95   | 95           |
| 4            | 34.92   | 67           |
| 6            | 35.92   | 54           |
| 8            | 37.12   | 22           |
Test Environment Configuration
- Memory: 512 GB (100 GB reserved as huge pages)
- NIC: 2 × Mellanox MT27710 25 Gbps
- Kernel: 4.19.90
Statistics Collected
Every second, dperf reports statistics such as TPS, CPS, PPS along several dimensions, TCP/Socket/HTTP error counts, packet drops, and retransmission counts broken down by TCP flag.
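These per-second lines are easy to post-process, since each one is a run of alternating name/value tokens. As a sketch (this helper is not part of dperf itself), a line like the sample output below can be parsed into integers:

```python
# Hypothetical helper (not part of dperf): parse one per-second
# statistics line of alternating "name value" tokens into a dict.
def parse_stats_line(line: str) -> dict:
    tokens = line.split()
    # Values use thousands separators, e.g. "3,001,058".
    return {name: int(value.replace(",", ""))
            for name, value in zip(tokens[::2], tokens[1::2])}

line = "pktRx 3,001,058 pktTx 3,001,025 bitsRx 2,272,799,040 dropTx 0"
stats = parse_stats_line(line)
print(stats["pktRx"] - stats["pktTx"])  # -> 33
```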
seconds 22 cpuUsage 52
pktRx 3,001,058 pktTx 3,001,025 bitsRx 2,272,799,040 bitsTx 1,920,657,600 dropTx 0
arpRx 0 arpTx 0 icmpRx 0 icmpTx 0 otherRx 0 badRx 0
synRx 1,000,345 synTx 1,000,330 finRx 1,000,350 finTx 1,000,350 rstRx 0 rstTx 0
synRt 0 finRt 0 ackRt 0 pushRt 0 tcpDrop 0
skOpen 1,000,330 skClose 1,000,363 skCon 230 skErr 0
httpGet 1,000,345 http2XX 1,000,350 httpErr 0
ierrors 0 oerrors 0 imissed 0

Getting Started
Configure Huge Pages
# Edit '/boot/grub2/grub.cfg' and add:
linux16 /vmlinuz-... nopku transparent_hugepage=never default_hugepagesz=1G hugepagesz=1G hugepages=8
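To confirm that the reservation took effect once the machine has been rebooted, the kernel's huge-page counters can be inspected (standard Linux, independent of dperf):

```shell
# Show huge-page counters; with the kernel command line above applied,
# HugePages_Total should be 8 and Hugepagesize 1048576 kB.
grep -E '^(HugePages|Hugepagesize)' /proc/meminfo
```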
# Reboot the OS

Compile DPDK
# Enable the required PMD in 'config/common_base'
# Example for Mellanox CX4/CX5:
# CONFIG_RTE_LIBRTE_MLX5_PMD=y
TARGET=x86_64-native-linuxapp-gcc # or arm64-armv8a-linuxapp-gcc
cd /root/dpdk/dpdk-stable-19.11.10
make install T=$TARGET -j16

Compile dperf
cd dperf
make -j8 RTE_SDK=/root/dpdk/dpdk-stable-19.11.10 RTE_TARGET=$TARGET

Bind NIC
# Skip for Mellanox NICs
modprobe uio
modprobe uio_pci_generic
/root/dpdk/dpdk-stable-19.11.10/usertools/dpdk-devbind.py -b uio_pci_generic 0000:1b:00.0

Start dperf Server
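A dperf server is driven by a plain-text configuration file of keyword lines. The following is only a rough sketch of what `test/http/server-cps.conf` might look like, using keywords from the dperf documentation; the addresses match this example, but the remaining values are illustrative, so consult the file in the repository for the authoritative version:

```
mode     server
cpu      0
duration 60s
port     0000:1b:00.0  6.6.241.27  6.6.241.1
client   6.6.241.2     50
server   6.6.241.27    1
listen   80            1
```

Here `port` names the NIC's PCI address, its IP, and the gateway, while `client` defines the address range from which connections are accepted.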
# Listen on 6.6.241.27:80, gateway 6.6.241.1
./build/dperf -c test/http/server-cps.conf

Send Requests from Client
# Ensure client IP is within the 'client' range in the config
ping 6.6.241.27
curl http://6.6.241.27/

Run a Test
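The client side uses the same configuration format with `mode client` and a target rate. An illustrative sketch of the shape of `test/http/client-cps.conf` (keywords from the dperf documentation, values hypothetical; the repository file is authoritative):

```
mode     client
cpu      0
duration 60s
cps      1m
port     0000:1b:00.0  6.6.241.2   6.6.241.1
client   6.6.241.2     50
server   6.6.241.27    1
listen   80            1
```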
On the client machine, execute:
./build/dperf -c test/http/client-cps.conf

Open-Source Repository
https://github.com/baidu/dperf