
Linux System Parameter and Nginx Configuration Optimization Guide

This guide explains how to improve web service performance by tuning Linux system parameters and Nginx configuration, covering file descriptor limits, TCP connection queues, temporary port ranges, worker processes, keepalive settings, and access‑log buffering, with concrete sysctl and Nginx directives.

360 Tech Engineering

Web service performance tuning is a systematic engineering effort: a single weak link can drag down overall performance, and strengthening that one weak link is often enough to meet requirements without chasing extreme optimization everywhere else.

The Linux system parameters below require kernel 2.6 or later (the author used CentOS 7.4 with kernel 3.10). Typical adjustments include file descriptor limits, connection queue lengths, and the ephemeral port range.

File Descriptor Limits

Each TCP connection consumes a file descriptor; exhausting them produces “Too many open files” errors. Raise the system‑wide limits in /etc/sysctl.conf:

fs.file-max = 10000000
fs.nr_open = 10000000

And the per‑user limits in /etc/security/limits.conf:

*    soft    nofile    1000000
*    hard    nofile    1000000

After editing, apply the sysctl changes with:

$ sysctl -p

The limits.conf changes take effect on the next login session; verify the nofile limit with ulimit -n.
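To confirm which limits are actually in effect, the values can be read straight from procfs (standard Linux paths; the numbers shown will vary by machine):

```shell
# System-wide ceiling on open file descriptors
cat /proc/sys/fs/file-max
# Per-process hard ceiling the kernel will allow
cat /proc/sys/fs/nr_open
# Soft nofile limit of the current shell (what "Too many open files" trips on)
ulimit -n
```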

TCP Connection Queue Length

Edit /etc/sysctl.conf to increase the SYN backlog and accept queue:

# The length of the syn queue
net.ipv4.tcp_max_syn_backlog = 65535
# The length of the tcp accept queue
net.core.somaxconn = 65535

tcp_max_syn_backlog controls the half‑open (SYN) queue; when it is full, new SYN packets are dropped, which shows up in the ListenDrops counter. somaxconn caps the fully‑established accept queue (the backlog passed to listen() is clamped to this value); when it overflows, the ListenOverflows counter rises, clients may see “connection reset by peer”, and Nginx may log “no live upstreams while connecting to upstreams”.
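To watch these queues in practice, the TcpExt counters in /proc/net/netstat can be paired up by name (a read‑only inspection sketch; counter positions vary by kernel version, hence the name matching rather than fixed columns):

```shell
# Print the ListenOverflows / ListenDrops counters by pairing the
# TcpExt header line with its value line in /proc/net/netstat.
awk '/^TcpExt:/ { if (hdr == "") hdr = $0; else vals = $0 }
     END {
       n = split(hdr, h); split(vals, v)
       for (i = 1; i <= n; i++)
         if (h[i] == "ListenOverflows" || h[i] == "ListenDrops")
           print h[i], v[i]
     }' /proc/net/netstat
```

For a per-listener view, `ss -lnt` shows each listening socket's configured backlog in the Send-Q column and its current accept-queue depth in Recv-Q.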

Ephemeral Port Range

When Nginx acts as a proxy, each outgoing upstream TCP connection consumes an ephemeral (local) port. Widen ip_local_port_range in /etc/sysctl.conf:

net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_local_reserved_ports = 8080,8081,9000-9010

ip_local_reserved_ports excludes the listed ports from ephemeral allocation, so that locally listening services (here 8080, 8081, and 9000‑9010) do not collide with outgoing connections drawn from the widened range.
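The current range can be read from procfs, and from it the ceiling on concurrent connections to a single upstream address (a sketch; one ephemeral port is needed per source‑IP/destination tuple):

```shell
# Read the ephemeral range and compute how many outgoing ports
# one (source IP, destination IP, destination port) tuple can draw from
read low high < /proc/sys/net/ipv4/ip_local_port_range
echo "ephemeral ports available: $(( high - low + 1 ))"
```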

Nginx Parameter Optimization

Worker Processes and Connections

Nginx’s strength lies in its multi‑process, non‑blocking I/O model. Set the number of workers to match CPU cores:

worker_processes auto;

Increase the per‑worker connection limit:

worker_connections 4096;

Select the most efficient I/O multiplexing method for Linux, epoll :

use epoll;
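Putting the three directives in context: worker_processes sits at the top level of nginx.conf, while worker_connections and use epoll belong inside the events block (a minimal sketch, not a complete configuration):

```nginx
# Top level: one worker process per CPU core
worker_processes auto;

events {
    # epoll is the efficient event mechanism on Linux 2.6+
    use epoll;
    # Per-worker cap on simultaneous connections (clients plus upstreams)
    worker_connections 4096;
}
```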

KeepAlive

Enable HTTP/1.1 keep‑alive to reduce connection churn. The keepalive directive defines the maximum idle upstream connections per worker:

upstream BACKEND {
    keepalive 300;
    server 127.0.0.1:8081;
}

server {
    listen 8080;
    location / {
        proxy_pass http://BACKEND;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

The official description: the keepalive parameter sets the maximum number of idle keep‑alive connections cached per worker; excess connections are closed.

For a target of 6,000 QPS at 200 ms average response time, roughly 6000 × 0.2 = 1,200 connections are in flight at any moment; a keepalive cache of 10‑30 % of that peak (e.g., 300) works well.
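The sizing arithmetic above is Little's law (concurrency = arrival rate × time in system); a quick sketch with the same numbers:

```shell
qps=6000        # target requests per second
latency_ms=200  # average response time
# Little's law: connections in flight = QPS x response time (in seconds)
conns=$(( qps * latency_ms / 1000 ))
echo "concurrent connections: $conns"        # 1200
# keepalive cache at ~25% of peak concurrency
echo "keepalive: $(( conns * 25 / 100 ))"    # 300
```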

Access‑Log Buffering

Logging I/O can be costly. Enable buffering to reduce write frequency:

access_log /var/logs/nginx-access.log buffer=64k gzip flush=1m;

buffer sets the in‑memory buffer size; buffered lines are written out when the buffer fills, when a log line does not fit in the remaining space, or when the flush interval elapses.

Worker File Descriptor Limit

Mirror the system file‑descriptor limit in Nginx with:

worker_rlimit_nofile 1000000;

Summary

The author’s Nginx tuning experience focuses on addressing major bottlenecks such as file descriptor limits, connection queues, keep‑alive settings, and log buffering. While many more knobs exist, the presented adjustments are sufficient for typical usage scenarios.
