How to Optimize Server Performance: Config, Load Analysis, and Kernel Tuning
Learn practical methods to boost server performance by selecting appropriate hardware configurations, analyzing CPU, memory, disk I/O and network loads, and fine‑tuning kernel parameters such as file limits and TCP settings, with step‑by‑step commands and monitoring tools like htop, iostat, and nload.
Server performance determines the upper limit of software performance, making server optimization essential. This guide covers three aspects: server configuration selection, server load analysis, and kernel parameter tuning.
Server Configuration Selection
A server consists of CPU, memory, disk, and network card. Choosing a configuration means deciding on CPU cores, memory size, disk capacity and type, and bandwidth. Because performance depends on the software implementation, there is no universal configuration for a target throughput such as 1000 TPS.
Configuration should be based on test results. Start with a lower‑spec server, tune and test, then use the results to guide the final choice.
Example: an order service tested on a 4‑core CPU, 16 GB RAM, 10 Mbps bandwidth, 50 GB HDD server achieved 50 concurrent users and 300 TPS. CPU usage was ~75 %, memory < 50 %, bandwidth < 50 %.
Thus a server with 4‑core CPU (CPU usage < 75 %), 8 GB RAM (memory usage near 100 %), and 5 Mbps bandwidth (bandwidth usage near 100 %) can handle the same load.
To reach 400 concurrent users and 2400 TPS (eight times the baseline), either eight 4‑core/8 GB/5 Mbps servers or one 32‑core/64 GB/40 Mbps server would be needed, but testing is still required to confirm.
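The server-count estimate above can be sketched as a quick calculation. This assumes throughput scales linearly with hardware, which only a real load test can confirm:

```shell
#!/bin/sh
# Baseline from the order-service test above.
BASE_TPS=300        # TPS handled by one tuned 4-core/8GB/5Mbps server
TARGET_TPS=2400     # desired total throughput

# Ceiling division: servers needed under a (naive) linear-scaling assumption.
SERVERS=$(( (TARGET_TPS + BASE_TPS - 1) / BASE_TPS ))
echo "Servers needed (linear estimate): $SERVERS"
```

Linear scaling is an upper bound on optimism: contention on shared resources (database, locks, network) usually makes real capacity lower, which is why the text insists on testing.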
Note: Backend and database servers should be tuned together to avoid mismatched capacities. The method may not suit every scenario; use it as a reference.
Server Load Analysis
CPU Usage
CPU usage reflects how busy the processor is. When it reaches 100 %, processes queue and wait. In practice, keep CPU usage below 75 % to leave headroom for spikes; if usage consistently exceeds this, consider adding servers.
Use htop to monitor CPU, memory, and load.
Using htop to view CPU load
Install htop on CentOS:
<code>yum install htop -y</code>
Run the tool:
<code>htop</code>
The display shows per‑core usage; when all cores exceed 75 %, the server is considered overloaded.
The screenshot shows a 4‑core server where three cores are above 75 % and all hover around 85 %, indicating high load.
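When htop is not installed, aggregate CPU usage can be sampled directly from /proc/stat. This is a minimal Linux‑only sketch that, for simplicity, counts only the user, nice, system, and idle columns:

```shell
#!/bin/sh
# Sample aggregate CPU usage over a 1-second window from /proc/stat.
# First line columns: cpu user nice system idle iowait ... (we use the first four).
read -r cpu u1 n1 s1 i1 rest < /proc/stat
sleep 1
read -r cpu u2 n2 s2 i2 rest < /proc/stat
busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
total=$(( busy + (i2 - i1) ))
[ "$total" -gt 0 ] || total=1   # guard against division by zero
pct=$(( 100 * busy / total ))
echo "CPU usage: ${pct}% (guideline: stay below 75%)"
```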
Memory Usage
Memory usage indicates how much RAM is occupied. When physical memory reaches 100 %, the system swaps to disk, which is much slower.
Keep physical memory usage below 80 % and avoid using swap.
Monitor memory with htop as well.
The example shows 16 GB total memory with ~10 GB used (62 % usage) and swap disabled.
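The same check can be scripted against /proc/meminfo (Linux‑only), flagging usage against the 80 % guideline above:

```shell
#!/bin/sh
# Compute physical memory usage from /proc/meminfo and compare it
# to the 80% guideline. MemAvailable accounts for reclaimable caches.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
used_pct=$(( 100 * (total_kb - avail_kb) / total_kb ))
echo "Memory usage: ${used_pct}%"
if [ "$used_pct" -gt 80 ]; then
    echo "WARNING: above the 80% guideline"
fi
```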
Disk I/O
Disk I/O represents read/write activity, often a bottleneck due to logging, file operations, and especially database access.
Using iostat to view Disk I/O
Install iostat (part of sysstat) on CentOS:
<code>yum install sysstat -y</code>
Run the tool:
<code>iostat -x 1</code>
Key metrics are %idle (CPU idle time, excluding time spent waiting for I/O, which should stay above 70 %) and %util (the percentage of time the device was busy serving I/O requests, which should stay below 70 %).
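On hosts with many disks the iostat columns are tedious to scan, so a small awk filter can flag devices whose %util exceeds the 70 % threshold. The heredoc below is fabricated sample output for illustration; pipe real `iostat -x` output in instead:

```shell
#!/bin/sh
# Flag devices whose %util (last column of `iostat -x` device lines)
# exceeds 70%. The heredoc is made-up sample data for demonstration.
busy=$(awk 'NR > 1 && $NF + 0 > 70 { print $1 }' <<'EOF'
Device   r/s    w/s    %util
sda      10.0   5.0    12.3
sdb      90.0   80.0   85.6
EOF
)
echo "Devices over 70% util: ${busy:-none}"
```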
Average Load
Average load is the average number of runnable (and, on Linux, uninterruptible) processes over 1, 5, and 15 minutes; it should stay below the number of CPU cores. Monitoring can also be done with htop.
Typically keep load below 75 % of the core count; the example server exceeds this, indicating a need for performance improvement.
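The 75 %-of-cores rule can be checked without htop by comparing /proc/loadavg against the core count (Linux‑only sketch):

```shell
#!/bin/sh
# Compare the 1-minute load average against 75% of the core count.
cores=$(nproc)
# Scale to hundredths so the comparison stays integer-only.
load100=$(awk '{ printf "%d", $1 * 100 }' /proc/loadavg)
limit100=$(( cores * 75 ))
echo "1-min load: $(cut -d' ' -f1 /proc/loadavg), cores: ${cores}"
if [ "$load100" -gt "$limit100" ]; then
    echo "WARNING: load above 75% of core count"
fi
```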
Network Usage
Network bandwidth affects response time; keep usage below 80 % to absorb spikes. The physical NIC (and the provisioned link) caps the maximum achievable bandwidth.
Use nload to monitor inbound and outbound traffic.
Using nload to view Network
Install nload on CentOS:
<code>yum install nload -y</code>
Run the tool:
<code>nload</code>
The display shows current, average, minimum, and maximum speeds plus total traffic. When the current speed approaches the maximum, bandwidth is near full utilization.
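nload reports raw speeds; relating them to the provisioned link is simple arithmetic. The link size and observed speed below are hypothetical:

```shell
#!/bin/sh
# Bandwidth utilization check with hypothetical numbers:
# a 10 Mbps link observed at 850 KB/s by nload.
LINK_KBIT=10000          # provisioned link, in kbit/s (10 Mbps)
SPEED_KBYTE=850          # observed throughput, in KB/s
speed_kbit=$(( SPEED_KBYTE * 8 ))     # bytes -> bits
util=$(( 100 * speed_kbit / LINK_KBIT ))
echo "Bandwidth utilization: ${util}%"
if [ "$util" -gt 80 ]; then
    echo "WARNING: above the 80% guideline"
fi
```

Note the bytes-versus-bits conversion: nload shows bytes per second by default, while links are sold in bits per second.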
Server Kernel Parameter Tuning
Beyond hardware, tuning kernel parameters is essential for high‑concurrency workloads. Typical targets are front‑end, back‑end, and database servers.
Maximum Open Files per Process
Edit /etc/security/limits.conf and add:
<code>* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535</code>
The asterisk applies the limits to all users; the values take effect for sessions started after the user logs in again (a reboot also works).
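After logging back in, the effective limits can be confirmed with the shell's ulimit builtin (the edit only affects sessions started after the change):

```shell
#!/bin/sh
# Show the open-file limits the current session actually received.
soft_files=$(ulimit -Sn)
hard_files=$(ulimit -Hn)
echo "open files: soft=${soft_files}, hard=${hard_files}"
```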
TCP Settings
Modify /etc/sysctl.conf to improve TCP performance under high load:
<code># Disable SYN flood protection for high‑concurrency systems
net.ipv4.tcp_syncookies = 0
# Reuse TIME‑WAIT sockets for new connections
net.ipv4.tcp_tw_reuse = 1
# Fast recycle of TIME‑WAIT sockets (breaks clients behind NAT; removed in Linux 4.12+)
net.ipv4.tcp_tw_recycle = 1
# Reduce FIN‑WAIT‑2 timeout (seconds)
net.ipv4.tcp_fin_timeout = 30
# Keepalive probe interval (seconds)
net.ipv4.tcp_keepalive_time = 1200
# Local port range for outbound connections
net.ipv4.ip_local_port_range = 1024 65535
# SYN backlog size
net.ipv4.tcp_max_syn_backlog = 65535
# Max TIME_WAIT sockets
net.ipv4.tcp_max_tw_buckets = 5000
# Max packets queued to the network interface
net.core.netdev_max_backlog = 65535
# Max TCP connections backlog
net.core.somaxconn = 65535
# Default and max receive buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
# Default and max send buffer sizes
net.core.wmem_default = 8388608
net.core.wmem_max = 16777216
# Disable TCP timestamps
net.ipv4.tcp_timestamps = 0
# Max orphaned TCP sockets to mitigate DoS attacks
net.ipv4.tcp_max_orphans = 3276800</code>
Apply the changes with <code>sysctl -p</code> and restart services as needed.
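After applying, the live values can be read back through /proc/sys, where each dot in a key name maps to a path component (this sketch only spot-checks two of the keys above):

```shell
#!/bin/sh
# Read live kernel values back via /proc/sys
# (equivalent to `sysctl -n <key>`, without needing the sysctl binary).
for key in net.core.somaxconn net.ipv4.tcp_fin_timeout; do
    path="/proc/sys/$(echo "$key" | tr . /)"
    if [ -r "$path" ]; then
        echo "$key = $(cat "$path")"
    fi
done
```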
macrozheng
Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.