
How to Diagnose and Fix Linux Network Latency with hping3, wrk, and Wireshark

This article explains how to identify the root causes of Linux network latency, such as slow transmission, kernel processing, and application delays, using tools like ping, traceroute, hping3, wrk, tcpdump, and Wireshark. It demonstrates practical testing with Nginx containers to analyze and mitigate latency issues.


Linux Network Latency

Network latency (RTT) is the round‑trip time for a packet to travel from source to destination and back. Application latency adds the processing time of the request and response.
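The distinction can be observed from userspace. Below is a minimal Python sketch (using a throwaway local HTTP server as a stand-in for a real service, an assumption for the sake of self-containment) that times the TCP handshake separately from the full request/response:

```python
import http.server
import socket
import threading
import time

# Throwaway HTTP server on an ephemeral port (stand-in for a real service).
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# Network latency: time for the TCP three-way handshake (connect) alone.
t0 = time.perf_counter()
sock = socket.create_connection((host, port))
connect_ms = (time.perf_counter() - t0) * 1000

# Application latency: request processing and response on top of the
# already-established connection.
t0 = time.perf_counter()
sock.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
response = sock.recv(65536)
request_ms = (time.perf_counter() - t0) * 1000

sock.close()
server.shutdown()
print(f"connect: {connect_ms:.2f} ms, request: {request_ms:.2f} ms")
```

On a real network the connect time approximates RTT, while the request time adds server-side processing on top of it.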

The <code>ping</code> command measures RTT using ICMP, but many services disable ICMP. In that case, <code>traceroute</code> or <code>hping3</code> in TCP/UDP mode can be used.

<code># hping3 -c 3 -S -p 80 google.com
HPING google.com (eth0 142.250.64.110): S set, 40 headers + 0 data bytes
len=46 ip=142.250.64.110 ttl=51 id=47908 sport=80 flags=SA seq=0 win=8192 rtt=9.3 ms
...
3 packets transmitted, 3 packets received, 0% packet loss
round‑trip min/avg/max = 9.3/10.9/11.9 ms
</code>
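hping3's SYN probes can be approximated from unprivileged userspace by timing plain <code>connect()</code> calls, since <code>connect()</code> returns once the handshake completes. A hedged sketch against a local listener (the listener is an assumption standing in for the remote port-80 service), reporting a min/avg/max summary like hping3:

```python
import socket
import statistics
import time

# Local listener standing in for a remote port-80 service.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
addr = listener.getsockname()

def measure_connect_rtt(addr, count=3):
    """Approximate hping3-style SYN RTT by timing TCP connect()."""
    samples = []
    for _ in range(count):
        t0 = time.perf_counter()
        with socket.create_connection(addr, timeout=2):
            pass  # connect() returns once the SYN/SYN-ACK exchange completes
        samples.append((time.perf_counter() - t0) * 1000)
    return samples

rtts = measure_connect_rtt(addr)
print(f"round-trip min/avg/max = "
      f"{min(rtts):.1f}/{statistics.mean(rtts):.1f}/{max(rtts):.1f} ms")
listener.close()
```

Unlike hping3, this measures the full handshake through the kernel socket API rather than raw SYN packets, but the numbers are comparable for diagnosis.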

Similarly, <code>traceroute --tcp -p 80 -n google.com</code> sends three TCP packets per hop and reports the RTT of each hop.

Case Study

Two hosts are used: host1 (192.168.0.30) runs two Nginx containers—one standard and one with artificial latency; host2 (192.168.0.2) acts as the analysis machine.

host1 setup

<code># Official nginx
docker run --network=host --name=good -itd nginx
# Latency version of nginx
docker run --name nginx --network=host -itd feisky/nginx:latency
</code>

Verify both containers serve traffic:

<code>$ curl http://127.0.0.1
<!DOCTYPE html>...
$ curl http://127.0.0.1:8080
...</code>

Latency measurement with hping3

<code># Port 80
hping3 -c 3 -S -p 80 192.168.0.30
# Port 8080
hping3 -c 3 -S -p 8080 192.168.0.30
</code>

Concurrent load testing with wrk

<code># Port 80
wrk --latency -c 100 -t 2 --timeout 2 http://192.168.0.30/
# Port 8080
wrk --latency -c 100 -t 2 --timeout 2 http://192.168.0.30:8080/
</code>

The standard Nginx (port 80) shows an average latency of ~9 ms, while the latency‑modified Nginx (port 8080) averages ~44 ms, with 50% of requests taking more than 44 ms.

Packet Capture and Analysis

On host1, capture traffic on port 8080:

<code>tcpdump -nn tcp port 8080 -w nginx.pcap
</code>

Open the <code>nginx.pcap</code> file in Wireshark on host2. Use “Follow → TCP Stream” to isolate a connection, then view “Statistics → Flow Graph” filtered to TCP flows. The graph shows that the second HTTP request experiences a ~40 ms delay caused by the TCP delayed‑ACK timer.

The delay is due to the TCP delayed‑ACK mechanism, which waits up to 40 ms before sending an ACK in the hope of piggybacking it on outgoing data. The client (wrk) does not enable <code>TCP_QUICKACK</code>, so the delayed ACK is observed.

<code>strace -f wrk --latency -c 100 -t 2 --timeout 2 http://192.168.0.30:8080/
... setsockopt(52, SOL_TCP, TCP_NODELAY, [1], 4) = 0
</code>

Since only <code>TCP_NODELAY</code> is set, the default delayed‑ACK behavior remains, which explains the higher latency of the test Nginx.
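On Linux, a client can opt out of delayed ACKs with the <code>TCP_QUICKACK</code> socket option alongside <code>TCP_NODELAY</code>. A minimal sketch, assuming Linux (where <code>socket.TCP_QUICKACK</code> is defined; it is absent on macOS and Windows) and using a local listener purely so the example is self-contained:

```python
import socket

# Local listener so we have something to connect to.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()

sock = socket.create_connection(listener.getsockname())

# Disable Nagle, which is what strace shows wrk already doing...
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# ...and additionally request immediate ACKs instead of delayed ACKs.
# Note: TCP_QUICKACK is not sticky -- the kernel may clear it again, so
# real clients re-enable it around each recv().
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)

nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY =", nodelay)

sock.close()
listener.close()
```

Because <code>TCP_QUICKACK</code> must be re-armed, setting it once at connect time (as a wrk-style client would have to) only suppresses delayed ACKs temporarily; the server-side fix is the durable one.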

Conclusion

Use <code>hping3</code> and <code>wrk</code> to verify single‑request and concurrent request latency.

Use <code>traceroute</code> to check routing and per‑hop delays.

Capture traffic with <code>tcpdump</code> and analyze it with Wireshark to spot protocol‑level issues such as delayed ACKs.

Inspect socket options with <code>strace</code> to ensure appropriate TCP settings.

Tags: performance testing, Linux, Wireshark, network latency, wrk, hping3
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
