Boost Linux Network Performance: Practical TCP/IP Stack Tuning Guide
This article presents a comprehensive, step‑by‑step guide for Linux network performance optimization, covering real‑world issues, TCP and IP stack parameter tweaks, queue and interrupt tuning, high‑concurrency scenarios, monitoring scripts, a detailed e‑commerce case study, best‑practice recommendations, and common pitfalls.
Introduction
In high‑concurrency, high‑traffic environments, network performance often becomes the bottleneck. Misconfigured TCP/IP parameters can cause severe latency, TIME_WAIT buildup, and low throughput. This guide shares a complete Linux network performance tuning solution.
Typical Network Performance Problems
High‑concurrency connections: massive connection spikes during e‑commerce promotions leave huge numbers of TIME_WAIT sockets.
Large file transfers: insufficient network throughput during data backup.
Micro‑service calls: frequent inter‑service requests cause latency jitter and unstable response times.
These issues usually stem from default kernel TCP/IP settings that cannot meet high‑performance demands.
TCP Stack Core Parameter Optimization
1. TCP Connection Management
# /etc/sysctl.conf configuration
# TCP connection queue length optimization
net.core.somaxconn = 65535 # increase listen queue length
net.core.netdev_max_backlog = 30000 # NIC receive queue length
net.ipv4.tcp_max_syn_backlog = 65535 # SYN queue length
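Whether these queue sizes are actually sufficient can be checked at runtime. A small sketch (the overflow counter's exact wording varies slightly across distributions):

```shell
# Check the effective accept-queue cap and look for overflow drops;
# a steadily growing "listen queue ... overflowed" counter means the
# backlog is still too small for the connection rate.
cat /proc/sys/net/core/somaxconn
netstat -s 2>/dev/null | grep -i 'listen queue' || true
```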
# TIME_WAIT optimization
net.ipv4.tcp_tw_reuse = 1 # allow reuse of TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_fin_timeout = 30 # reduce FIN_WAIT_2 time
net.ipv4.tcp_max_tw_buckets = 10000 # limit TIME_WAIT sockets
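A quick way to see whether TIME_WAIT buildup is under control after applying the settings above (compare the count against the tcp_max_tw_buckets limit of 10000):

```shell
# Count sockets currently parked in TIME_WAIT (output includes one
# header line, so subtract 1 for the exact socket count).
ss -tan state time-wait 2>/dev/null | wc -l
```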
# Keepalive settings
net.ipv4.tcp_keepalive_time = 600 # start keepalive probes after 10 min
net.ipv4.tcp_keepalive_probes = 3 # number of keepalive probes
net.ipv4.tcp_keepalive_intvl = 15 # interval between probes
2. TCP Buffer Optimization
# TCP receive/send buffer optimization
net.core.rmem_default = 262144 # default receive buffer
net.core.rmem_max = 16777216 # max receive buffer
net.core.wmem_default = 262144 # default send buffer
net.core.wmem_max = 16777216 # max send buffer
# Automatic socket buffer scaling
net.ipv4.tcp_rmem = 4096 87380 16777216 # min default max for receive
net.ipv4.tcp_wmem = 4096 65536 16777216 # min default max for send
net.ipv4.tcp_mem = 94500000 915000000 927000000 # units are memory pages, not bytes – scale to the host's actual RAM
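A useful sanity check for the 16 MB maximums is the bandwidth‑delay product (BDP): a single connection can only keep the pipe full if its window can cover it. A small sketch, assuming a 10 Gbit/s link with a 12 ms round‑trip time (adjust both for your path):

```shell
#!/bin/bash
# Bandwidth-delay product: bytes "in flight" needed to saturate the link.
BW_BITS=$((10 * 1000 * 1000 * 1000))    # assumed link speed: 10 Gbit/s
RTT_MS=12                               # assumed round-trip time
BDP_BYTES=$((BW_BITS / 8 * RTT_MS / 1000))
echo "BDP = ${BDP_BYTES} bytes"         # 15000000 – the 16777216 max covers it
```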
# Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1
3. TCP Congestion Control
# Choose congestion control algorithm
net.ipv4.tcp_congestion_control = bbr # recommended BBR algorithm
# alternatives: cubic, reno, bic
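Before selecting BBR it is worth confirming the kernel actually offers it. A sketch, assuming a typical distro kernel (BBR requires kernel 4.9+ and usually ships as a module; on kernels older than 4.13 it should be paired with the fq qdisc for pacing):

```shell
# Load the BBR module if present (harmless no-op if built in or missing),
# then list the congestion control algorithms the kernel can use.
modprobe tcp_bbr 2>/dev/null || true
cat /proc/sys/net/ipv4/tcp_available_congestion_control
# On kernels < 4.13, also set: net.core.default_qdisc = fq
```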
# Fast retransmit and recovery
net.ipv4.tcp_frto = 2 # detect false timeouts
net.ipv4.tcp_dsack = 1 # enable DSACK
net.ipv4.tcp_fack = 1 # enable FACK (removed in kernel 4.15+; the setting is ignored there)
# Disable slow start after idle
net.ipv4.tcp_slow_start_after_idle = 0
IP Stack Parameter Optimization
1. IP Layer Processing
# IP forwarding and routing
net.ipv4.ip_forward = 0 # disable forwarding on non‑router hosts
net.ipv4.conf.default.rp_filter = 1 # enable reverse path filtering
net.ipv4.conf.all.rp_filter = 1
# IP fragmentation thresholds
net.ipv4.ipfrag_high_thresh = 262144 # high threshold
net.ipv4.ipfrag_low_thresh = 196608 # low threshold
net.ipv4.ipfrag_time = 30 # reassembly timeout
# ICMP optimizations
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
2. Port Range Optimization
# Expand local port range
net.ipv4.ip_local_port_range = 1024 65535
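The arithmetic behind the wider range: TIME_WAIT holds each ephemeral port for roughly 60 seconds, so the pool size directly caps the sustainable outbound connection rate to any single destination:

```shell
#!/bin/bash
# Each (source IP, destination IP:port) tuple draws from the ephemeral
# port pool; a port stays unusable for ~60 s while in TIME_WAIT.
PORTS=$((65535 - 1024 + 1))
TW_HOLD=60
echo "ephemeral ports: $PORTS"                            # 64512
echo "max new conn/s per destination: $((PORTS / TW_HOLD))" # 1075
```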
# UDP memory settings
net.ipv4.udp_mem = 94500000 915000000 927000000
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
Network Queue and Interrupt Optimization
1. Device Queue Tuning
# Increase network device processing budget
echo 4096 > /proc/sys/net/core/netdev_budget # packets per NAPI poll cycle
echo 8000 > /proc/sys/net/core/netdev_budget_usecs # µs per poll cycle (default 2000); re-apply both at boot, e.g. via /etc/rc.local
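The value written to rps_cpus is a hexadecimal CPU bitmask, not a CPU count. A small sketch of deriving it (valid for up to 64 cores):

```shell
#!/bin/bash
# Build the hex mask that allows CPUs 0..(cpus-1) to process an RX queue.
cpus=4                                    # assumed core count
mask=$(printf '%x' $(( (1 << cpus) - 1 )))
echo "$mask"                              # f  (binary 1111 -> CPUs 0-3)
```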
# RPS/RFS for multi‑core load balancing
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus # adjust per CPU core
2. Interrupt Balancing Script
#!/bin/bash
# network_irq_balance.sh – spread NIC interrupts across CPUs round-robin
# Note: stop the irqbalance daemon first, or it will overwrite these masks.
IRQ_LIST=$(grep eth0 /proc/interrupts | awk -F: '{print $1}' | xargs)
CPU_COUNT=$(nproc)
i=0
for irq in $IRQ_LIST; do
    cpu_mask=$((1 << (i % CPU_COUNT)))
    printf "%x" $cpu_mask > /proc/irq/$irq/smp_affinity
    echo "IRQ $irq -> CPU $((i % CPU_COUNT))"
    ((i++))
done
High‑Concurrency Scenario Optimizations
1. Large Connection Count
# Increase file descriptor limits
* soft nofile 1048576
* hard nofile 1048576
# Process limits
* soft nproc 1048576
* hard nproc 1048576
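These limits.conf entries only take effect for new PAM login sessions. A quick way to verify after logging back in:

```shell
# Show the soft open-file limit of the current shell, and the kernel's
# view of the limits for this process.
ulimit -n
grep 'open files' /proc/self/limits
```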
# systemd limits
DefaultLimitNOFILE=1048576
DefaultLimitNPROC=1048576
2. Memory Management
# Reduce swap usage
vm.swappiness = 10
# Dirty page writeback ratios
vm.dirty_ratio = 15
vm.dirty_background_ratio = 5
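To make the ratios concrete, here is what they translate to on an assumed 64 GB host (the percentages apply to available RAM):

```shell
#!/bin/bash
# dirty_background_ratio=5: background flushing starts.
# dirty_ratio=15: writing processes are throttled.
RAM_MB=$((64 * 1024))                      # assumed host RAM: 64 GB
echo "background flush at $((RAM_MB * 5 / 100)) MB"   # 3276 MB
echo "hard throttle at $((RAM_MB * 15 / 100)) MB"     # 9830 MB
```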
# Allow memory overcommit
vm.overcommit_memory = 1
Performance Monitoring and Validation
1. Key Metric Monitoring Script
#!/bin/bash
# network_monitor.sh – snapshot key network performance metrics
echo "=== Network Connection Summary ==="
ss -s
echo -e "\n=== TCP Connection States ==="
ss -tan | awk 'NR>1{state[$1]++} END{for(i in state) print i, state[i]}'
echo -e "\n=== Network Throughput ==="
sar -n DEV 1 1 | grep -E "eth0|Average"
echo -e "\n=== Memory Usage ==="
free -h
echo -e "\n=== System Load ==="
uptime
2. Load Testing Commands
# HTTP load test with wrk
wrk -t12 -c400 -d30s --latency http://your-server-ip/
# Bandwidth test with iperf3
iperf3 -s # server side
iperf3 -c server-ip -t 60 -P 10 # client side
# TCP connection stress test
ab -n 100000 -c 1000 http://your-server-ip/
Real‑World E‑Commerce Case Study
After applying the above tuning, the e‑commerce system showed dramatic improvements:
QPS increased from 15k to 45k (a 200% gain).
Average latency dropped from 120 ms to 35 ms (71% lower).
P99 latency fell from 800 ms to 150 ms (81% lower).
Concurrent connections grew from 10k to 50k (a 400% increase).
CPU usage fell from 85% to 45% (a 47% reduction).
Key Optimization Points
BBR congestion control: enabled a 40% throughput boost.
TCP buffer tuning: significantly reduced latency jitter.
Connection reuse: TIME_WAIT sockets decreased by 90%.
Interrupt balancing: improved multi‑core CPU utilization.
Best‑Practice Recommendations
1. Scenario‑Specific Tuning
High‑concurrency web servers
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
Large‑file transfer servers
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_window_scaling = 1
Database servers
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_retries2 = 5
2. Production Deployment Workflow
Test‑environment validation: apply configuration in a staging environment first.
Canary release: roll out to a subset of servers.
Monitoring observation: watch key metrics closely.
Full rollout: deploy to all servers after verification.
3. Configuration Persistence
# Apply all sysctl settings
sysctl -p
# Verify critical parameters
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.somaxconn
# Make settings survive reboot: entries in /etc/sysctl.conf are re-applied
# automatically at boot (by systemd-sysctl or the init scripts), so no
# extra rc.local entry is needed
Precautions and Common Pitfalls
1. Parameter Tuning Mistakes
Blindly enlarging buffers: may exhaust system memory.
Over‑optimizing TIME_WAIT: can cause port exhaustion.
Ignoring business characteristics: different workloads need different tuning strategies.
2. Rollback Plan
# Backup current sysctl.conf
cp /etc/sysctl.conf /etc/sysctl.conf.backup.$(date +%Y%m%d)
# Quick rollback script
cat > /root/network_rollback.sh <<'EOF'
#!/bin/bash
# Restore the most recent backup (a bare glob would fail with several backups)
latest=$(ls -t /etc/sysctl.conf.backup.* | head -n 1)
cp "$latest" /etc/sysctl.conf
sysctl -p
echo "Network config rollback completed!"
EOF
chmod +x /root/network_rollback.sh
Conclusion
Systematic TCP/IP stack tuning can dramatically improve Linux server network performance. The key steps are understanding business characteristics, incremental tuning, continuous monitoring, thorough testing, and having a reliable rollback plan.
Liangxu Linux
Liangxu, a self‑taught IT professional now working as a Linux development engineer at a Fortune 500 multinational, shares extensive Linux knowledge—fundamentals, applications, tools, plus Git, databases, Raspberry Pi, etc. (Reply “Linux” to receive essential resources.)