
Why Is My Linux Server Dropping Packets? A Step‑by‑Step Deep Dive

This article walks through a systematic investigation of Linux network packet loss, covering potential loss points across the protocol stack, using tools like ethtool, netstat, iptables, tc, hping3, curl, and tcpdump to identify and resolve misconfigurations such as faulty netem rules and an incorrect MTU setting.


In‑Depth Analysis of Linux Network Packet Loss

1. Background

Packet loss can occur at any point in the network protocol stack, from the link between two VMs all the way up to the application layer. Common loss points include:

Transmission failures between two VMs (e.g., congestion, line errors)

Ring buffer overflow after NIC receives packets

Link‑layer issues (frame checksum failures, QoS)

IP‑layer problems (routing failures, packets exceeding the MTU)

Transport‑layer issues (no process listening on the port, kernel resource limits)

Socket‑layer buffer overflow

Application‑layer exceptions

iptables filtering rules can also drop packets
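The checklist above can be turned into a quick per-layer triage routine. The sketch below assumes the interface is eth0 and that the usual net-tools/iproute2 utilities are installed; the iptables step typically requires root:

```shell
#!/bin/sh
# Sketch: dump per-layer statistics for one interface so each potential
# loss point from the list above can be ruled out in turn.
triage() {
    ifc="${1:-eth0}"
    echo "== link layer: per-interface counters =="
    netstat -i                       # RX-DRP / RX-OVR point at NIC drops
    echo "== queueing discipline (tc) =="
    tc -s qdisc show dev "$ifc"      # netem/qdisc drops are counted here
    echo "== network and transport layers =="
    netstat -s | head -40            # IP/TCP error and retransmit stats
    echo "== firewall =="
    iptables -t filter -nvL          # per-rule packet counters
}
# usage (as root): triage eth0
```

The rest of this article walks through these layers one at a time.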

2. Link Layer

When the NIC drops packets (for example, on ring buffer overflow), Linux records the events in per-interface counters. Use ethtool -S or netstat -i to view NIC statistics.

netstat -i
Kernel Interface table
Iface   MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0    100    31      0      0      0    8      0      0      0 BMRU
lo    65536     0      0      0      0    0      0      0      0 LRU

RX‑OK, RX‑ERR, RX‑DRP, RX‑OVR represent total received packets, total errors, packets dropped after entering the ring buffer, and packets dropped due to ring buffer overflow, respectively. TX‑OK, TX‑ERR, TX‑DRP, TX‑OVR have analogous meanings for transmission.
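Since the column order is fixed, individual counters can be pulled out of netstat -i output with a small awk helper. This is a hypothetical convenience function, not part of net-tools; it assumes the column layout shown above:

```shell
# Hypothetical helper: print the RX-DRP column for one interface from
# `netstat -i`-style output read on stdin. Columns are Iface, MTU,
# RX-OK, RX-ERR, RX-DRP, ... so RX-DRP is field 5.
rx_drp() {
    awk -v ifc="$1" '$1 == ifc { print $5 }'
}

# Example with the eth0 row from the table above:
printf 'eth0 100 31 0 0 0 8 0 0 0 BMRU\n' | rx_drp eth0   # prints 0
```

In practice you would pipe live output: `netstat -i | rx_drp eth0`.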

The output above shows no errors on the virtual NIC. However, drops caused by tc rules are not reflected in these counters, so the tc configuration must be checked separately.

tc -s qdisc show dev eth0
qdisc netem 800d: root refcnt 2 limit 1000 loss 30%
Sent 432 bytes 8 pkt (dropped 4, overlimits 0 requeues 0)

The netem rule introduces a 30% random packet loss, which explains the observed drops.
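If the netem rule is not intentional (it is often left over from a fault-injection test), it can be cleared by deleting the root qdisc, after which the kernel falls back to its default. A minimal sketch, assuming eth0 and root privileges:

```shell
# Remove the root qdisc (and the netem loss rule with it); the kernel
# reinstalls its default qdisc afterwards.
clear_netem() {
    tc qdisc del dev "${1:-eth0}" root
    tc -s qdisc show dev "${1:-eth0}"   # verify: netem should be gone
}
# usage (as root): clear_netem eth0
```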

3. Network and Transport Layers

Use netstat -s to view protocol‑level statistics and identify errors.

netstat -s
Ip:
  Forwarding: 1
  total packets received: 31
  forwarded: 0
  incoming packets discarded: 0
  incoming packets delivered: 25
Tcp:
  active connection openings: 0
  passive connection openings: 0
  failed connection attempts: 11
  segments retransmitted: 4
  ...
TcpExt:
  resets received for embryonic SYN_RECV sockets: 11
  TCPSynRetrans: 4
  TCPTimeouts: 7

The statistics reveal multiple TCP failures: 11 failed connection attempts, 11 resets of embryonic (half-open) SYN_RECV sockets, plus SYN retransmissions and timeouts, all pointing to problems during the handshake.
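Because netstat -s counters are cumulative since boot, what matters is whether they are still growing. The helper below is a hypothetical sketch that extracts a named counter from the "label: value" format shown above; sampling it twice and comparing separates ongoing loss from historical totals:

```shell
# Hypothetical helper: pull the value of a named counter out of
# `netstat -s`-style "label: value" output read on stdin.
counter() {
    awk -F': ' -v name="$1" '$1 ~ name { print $2 }'
}

# Example with the statistics shown above:
printf '  segments retransmitted: 4\n' | counter 'segments retransmitted'   # prints 4

# Live check: sample twice and compare.
# netstat -s | counter 'segments retransmitted'; sleep 5
# netstat -s | counter 'segments retransmitted'
```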

4. iptables

Check connection tracking limits:

sysctl net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_max = 262144
sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 182

Since the count is far below the maximum, connection tracking is not the cause.
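To make that judgment at a glance, the usage can be expressed as a percentage of the limit. A small sketch using the two sysctl values above (the helper name is made up for illustration):

```shell
# Hypothetical helper: conntrack table usage as a percentage.
conntrack_usage() {
    # $1 = current count, $2 = configured maximum
    awk -v c="$1" -v m="$2" 'BEGIN { printf "%.2f%%\n", 100 * c / m }'
}

conntrack_usage 182 262144   # prints 0.07%

# Live values:
# conntrack_usage "$(sysctl -n net.netfilter.nf_conntrack_count)" \
#                 "$(sysctl -n net.netfilter.nf_conntrack_max)"
```

Anything approaching 100% would cause new connections to be dropped with "nf_conntrack: table full" kernel messages; 0.07% rules that out here.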

List filter table rules:

iptables -t filter -nvL
Chain INPUT (policy ACCEPT 25 packets, 1000 bytes)
 pkts bytes target prot opt in  out source     destination
    6   240 DROP   all  --  *   *   0.0.0.0/0  0.0.0.0/0   statistic mode random probability 0.29999999981
Chain OUTPUT (policy ACCEPT 15 packets, 660 bytes)
 pkts bytes target prot opt in  out source     destination
    6   264 DROP   all  --  *   *   0.0.0.0/0  0.0.0.0/0   statistic mode random probability 0.30

Both INPUT and OUTPUT chains contain statistic rules that randomly drop 30% of packets, which matches the observed loss.

Delete the offending rules:

iptables -t filter -D INPUT -m statistic --mode random --probability 0.30 -j DROP
iptables -t filter -D OUTPUT -m statistic --mode random --probability 0.30 -j DROP
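After deleting the rules, it is worth re-listing the table to confirm no statistic matches remain. A sketch, assuming root:

```shell
# Confirm no random-drop rules remain in the filter table.
check_no_statistic_rules() {
    if iptables -t filter -nvL | grep -q statistic; then
        echo "statistic rules still present" >&2
        return 1
    fi
    echo "filter table clean"
}
# usage (as root): check_no_statistic_rules
```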

5. tcpdump

Capture traffic on port 80:

tcpdump -i eth0 -nn port 80

In another terminal, run a curl request to verify the HTTP response:

curl --max-time 3 http://192.168.0.30

The request timed out, indicating that HTTP packets were still being dropped.

Inspect NIC counters again:

netstat -i
Kernel Interface table
Iface   MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0    100   157      0    344      0   94      0      0      0 BMRU
lo    65536     0      0      0      0    0      0      0      0 LRU

The RX‑DRP count of 344 confirms packet loss on the NIC receive side. The MTU of 100 on eth0 is abnormal; Ethernet defaults to 1500, so any packet larger than 100 bytes is being dropped.
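The effect of the undersized MTU can be demonstrated directly with don't-fragment pings: with -M do, the kernel refuses to fragment locally, so payloads that would exceed the MTU fail immediately. A sketch (sizes are ICMP payload bytes; add 28 for the IP and ICMP headers):

```shell
# Probe the effective MTU with don't-fragment pings.
probe_mtu() {
    host="$1"; size="$2"
    # -M do: set DF and forbid local fragmentation; -s: payload size.
    ping -c 1 -M do -s "$size" "$host"
}
# usage:
#   probe_mtu 192.168.0.30 72    # 72 + 28 = 100 bytes, fits MTU 100
#   probe_mtu 192.168.0.30 500   # fails while eth0's MTU is 100
```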

Fix the MTU:

ifconfig eth0 mtu 1500

After adjusting the MTU, repeat the curl test:

curl --max-time 3 http://192.168.0.30/

The command now returns the expected HTML page from Nginx, confirming that the packet-loss issue is resolved.
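Note that ifconfig is deprecated on many distributions, and the change does not survive a reboot. The iproute2 equivalent, with a verification step, might look like:

```shell
# iproute2 equivalent of `ifconfig eth0 mtu 1500` (requires root):
set_mtu() {
    ip link set dev "${1:-eth0}" mtu "${2:-1500}"
    ip link show dev "${1:-eth0}"   # verify the new MTU
}
# usage (as root): set_mtu eth0 1500
# Persist the value in your distribution's network configuration
# (e.g. netplan, NetworkManager, or /etc/network/interfaces) so it
# survives reboots.
```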

Tags: Network, iptables, tcpdump, netstat, Packet Loss, tc
Written by

Open Source Linux

Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.
