
Boost Linux Network Performance: Proven Tips to Increase Bandwidth & Reduce Latency

This article provides a comprehensive guide to Linux network performance tuning, covering key metrics, practical commands for adjusting TCP parameters, congestion control, kernel optimizations, hardware choices, zero‑copy techniques, load balancing, and essential monitoring tools to achieve higher bandwidth and lower latency.

Raymond Ops

Key Metrics for Linux Network Performance

Bandwidth : Amount of data transferred per unit time, measured in bps; insufficient bandwidth slows overall system performance.

Latency : Time for data to travel from source to destination; high latency degrades user experience, especially for real‑time apps.

Packet Loss : Ratio of lost packets during transmission; causes retransmissions, increasing latency and reducing effective bandwidth.

Throughput : Actual useful data transferred, usually lower than theoretical bandwidth due to congestion and hardware limits.

Techniques to Increase Bandwidth

1. Adjust TCP Window Size

The TCP window limits how much unacknowledged data may be in flight; larger windows raise throughput, especially on high-bandwidth or high-latency links. View the current values:

sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem

Set larger minimum, default, and maximum values (e.g., 6 MiB max):

sysctl -w net.ipv4.tcp_rmem="4096 87380 6291456"
sysctl -w net.ipv4.tcp_wmem="4096 65536 6291456"
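Note that sysctl -w changes are lost on reboot. To persist them, the same settings can go in a sysctl drop-in file; a sketch (the file name is arbitrary):

```shell
# /etc/sysctl.d/90-tcp-buffers.conf  (hypothetical file name)
# min / default / max buffer sizes, in bytes
net.ipv4.tcp_rmem = 4096 87380 6291456
net.ipv4.tcp_wmem = 4096 65536 6291456

# apply without rebooting:
#   sysctl --system
```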

2. Enable TCP Fast Open

TCP Fast Open reduces connection-setup delay by carrying data in the SYN packet of the three-way handshake (using a TFO cookie). The value is a bitmask: 1 enables client support, 2 enables server support, 3 enables both.

sysctl -w net.ipv4.tcp_fastopen=3

3. Change TCP Congestion Control Algorithm

Check the current algorithm:

sysctl net.ipv4.tcp_congestion_control

If it is cubic or reno, switch to BBR for better bandwidth utilization on high-speed, high-latency links:

sysctl -w net.ipv4.tcp_congestion_control=bbr
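BBR ships as a kernel module (tcp_bbr) in Linux 4.9 and later. A sketch of checking availability and persisting the choice (the drop-in file names are assumptions, not fixed paths):

```shell
# list the algorithms the running kernel can use
sysctl net.ipv4.tcp_available_congestion_control

# load the BBR module if it is not listed
modprobe tcp_bbr

# persist across reboots (hypothetical file names):
#   /etc/modules-load.d/bbr.conf  ->  tcp_bbr
#   /etc/sysctl.d/90-bbr.conf     ->  net.ipv4.tcp_congestion_control = bbr
```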

Techniques to Reduce Latency

1. Optimize Kernel Parameters

Increase TCP buffers, shorten the FIN-WAIT-2 timeout, and allow TIME-WAIT sockets to be reused for outgoing connections to shorten wait times:

sysctl -w net.ipv4.tcp_rmem="4096 87380 6291456"
sysctl -w net.ipv4.tcp_wmem="4096 65536 6291456"
sysctl -w net.ipv4.tcp_fin_timeout=30
sysctl -w net.ipv4.tcp_tw_reuse=1

2. Use Efficient NIC and Drivers

Run irqbalance to distribute NIC interrupts across CPU cores, reducing interrupt-handling latency, and verify the driver in use:

ethtool -i eth0

3. Apply Zero‑Copy Techniques

Use the sendfile() system call to transfer data between two file descriptors entirely inside the kernel, avoiding copies through user-space buffers:

/* ssize_t sendfile(int out_fd, int in_fd, off_t *offset, size_t count); */
sendfile(socket, file_descriptor, NULL, file_size);

4. High‑Precision Clock Synchronization

Synchronize server clocks with NTP (or chrony) so that latency measurements, timeouts, and distributed coordination are not skewed by clock drift; check peer status with:

ntpq -p

Network Load Balancing and Optimization

1. Multi‑NIC Bonding

Combine multiple NICs into a single logical interface using ifenslave:

ifenslave bond0 eth0 eth1

Configure an appropriate bonding mode (e.g., 802.3ad/LACP for aggregate bandwidth, active-backup for failover) in /etc/network/interfaces to improve bandwidth and resilience.
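A sketch of a bonding stanza for a Debian-style /etc/network/interfaces, assuming interfaces eth0/eth1, 802.3ad mode, and an example address (adjust names, mode, and addressing to your environment):

```shell
# /etc/network/interfaces (Debian/Ubuntu, ifupdown with ifenslave installed)
auto bond0
iface bond0 inet static
    address 192.0.2.10/24        # example address (RFC 5737 range)
    bond-slaves eth0 eth1
    bond-mode 802.3ad            # LACP; the switch must support it
    bond-miimon 100              # link-check interval, ms
```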

2. Optimize TCP Connection Limits

Increase the maximum number of pending connections for high‑concurrency workloads:

sysctl -w net.core.somaxconn=65535
sysctl -w net.ipv4.tcp_max_syn_backlog=65535

Monitoring and Analysis Tools

iftop and nload : real-time bandwidth usage per interface.

netstat and ss : socket states and listening ports.

iperf : bandwidth, jitter, and packet loss between hosts.

Conclusion

Linux network performance tuning involves adjusting kernel parameters, selecting optimal TCP algorithms, leveraging hardware features, and employing proper monitoring tools. By applying these practices, administrators can substantially increase bandwidth, lower latency, and deliver more reliable, high‑performance services in both traditional data centers and cloud environments.

Written by

Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
