
Diagnose Linux Server Performance in 60 Seconds with 10 Essential Commands

When a Linux server suddenly spikes in load, this guide shows how to pinpoint the root cause within a minute by running ten key commands that reveal CPU, memory, disk I/O, and network metrics.


Overview

This article presents a rapid, one‑minute workflow for diagnosing Linux performance problems using ten common command‑line tools. By executing these commands you can quickly assess CPU utilization, memory pressure, disk I/O, and network activity, following the USE (Utilization‑Saturation‑Errors) method.

Uptime

The uptime command displays the system’s load averages for the past 1, 5, and 15 minutes, helping you see whether a load spike is transient or sustained.

$ uptime
23:51:26 up 21:31,  1 user,  load average: 30.02, 26.43, 19.02

A high 1‑minute average compared with a low 15‑minute average indicates a recent surge that warrants deeper investigation.
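As a sketch, the three averages can be pulled out of uptime's output and compared programmatically; the parse_load and load_trend helpers below are illustrative, not standard tools, and the 25% comparison margin is an arbitrary choice:

```python
import re

def parse_load(uptime_line):
    """Extract the 1-, 5-, and 15-minute load averages from an uptime line."""
    m = re.search(r"load average:\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)", uptime_line)
    return tuple(float(x) for x in m.groups())

def load_trend(one, five, fifteen):
    """Crude heuristic: compare the 1-minute and 15-minute averages."""
    if one > fifteen * 1.25:
        return "rising"    # recent surge worth investigating
    if one < fifteen * 0.75:
        return "falling"   # the spike may already be over
    return "steady"

line = "23:51:26 up 21:31,  1 user,  load average: 30.02, 26.43, 19.02"
one, five, fifteen = parse_load(line)
print(load_trend(one, five, fifteen))  # rising
```

Running this against the sample line above classifies the load as rising, matching the eyeball reading.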

dmesg | tail

Prints the last ten lines of the kernel ring buffer, useful for spotting out‑of‑memory kills or network anomalies.

$ dmesg | tail
[1880957.563150] perl invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
[1880957.563400] Out of memory: Kill process 18694 (perl) score 246 or sacrifice child
[1880957.563408] Killed process 18694 (perl) total-vm:1972392kB, anon-rss:1953348kB, file-rss:0kB
[2320864.954447] TCP: Possible SYN flooding on port 7001. Dropping request.  Check SNMP counters.

These logs can reveal kernel‑level failures that contribute to performance degradation.
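A minimal sketch of scanning captured kernel messages for the failure signatures shown above; the pattern list here is an illustrative starting point, not an exhaustive catalogue:

```python
# Substrings that flag the kernel-level failures discussed in the text.
TROUBLE_PATTERNS = ("oom-killer", "Out of memory", "SYN flooding")

def scan_dmesg(lines):
    """Return kernel-log lines that match a known trouble signature."""
    return [line for line in lines if any(p in line for p in TROUBLE_PATTERNS)]

sample = [
    "[1880957.563150] perl invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0",
    "[1880957.563408] Killed process 18694 (perl) total-vm:1972392kB, anon-rss:1953348kB, file-rss:0kB",
    "[2320864.954447] TCP: Possible SYN flooding on port 7001. Dropping request.",
]
for hit in scan_dmesg(sample):
    print(hit)
```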

vmstat 1

Provides per‑second snapshots of processes, memory, swap, I/O, and CPU statistics.

$ vmstat 1
procs ---------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r  b swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
34  0   0 200889792  73708 591828    0    0     0     5   6   10 96  1  3  0  0
...

Key columns:

r : runnable processes waiting for CPU (high value > CPU cores indicates saturation).

free : idle memory in KB (cached memory, reported separately, is also reclaimable).

si/so : swap in/out activity.

us, sy, id, wa, st : percentage of CPU time spent in user code, kernel (system) code, idle, I/O wait, and stolen by the hypervisor.

High wa suggests I/O bottlenecks; high us+sy indicates CPU‑bound workloads.
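The interpretation above can be folded into a small classifier over one vmstat sample; the thresholds are illustrative rules of thumb, not fixed limits:

```python
def classify_vmstat(r, cores, us, sy, wa):
    """Map one vmstat row to the likely bottleneck(s)."""
    findings = []
    if r > cores:
        findings.append("run-queue saturation")  # more runnable tasks than CPUs
    if wa >= 20:
        findings.append("I/O wait")              # CPUs stalled on disk
    if us + sy >= 90:
        findings.append("CPU-bound")             # little idle headroom
    return findings

# The sample row: r=34 on a 32-CPU box, us=96, sy=1, wa=0
print(classify_vmstat(r=34, cores=32, us=96, sy=1, wa=0))
```

For the sample row this reports both run-queue saturation and a CPU-bound workload, which is exactly what the raw numbers suggest.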

mpstat -P ALL 1

Displays per‑CPU utilization, helping identify a single‑threaded process that monopolizes a core.

$ mpstat -P ALL 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
07:38:49 PM  CPU   %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
07:38:50 PM  all   98.47 0.00 0.75 0.00   0.00 0.00   0.00   0.00   0.00   0.78
07:38:50 PM    0   96.04 0.00 2.97 0.00   0.00 0.00   0.00   0.00   0.00   0.99
...

If one CPU shows a markedly higher usage, the culprit is likely a single‑threaded application.
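One way to spot that pattern automatically is to check whether a single CPU is hot while nearly all the others sit idle; this is a hedged sketch, and the 90%/10% busy-versus-idle thresholds are arbitrary choices:

```python
def single_thread_suspect(per_cpu_busy, hot=90.0, idle=10.0):
    """True when exactly one CPU is busy and nearly all others are idle."""
    hot_cpus = [b for b in per_cpu_busy if b >= hot]
    idle_cpus = [b for b in per_cpu_busy if b <= idle]
    return len(hot_cpus) == 1 and len(idle_cpus) >= len(per_cpu_busy) - 1

print(single_thread_suspect([99.0, 2.1, 0.5, 1.0]))     # one hot core
print(single_thread_suspect([50.0, 48.0, 52.0, 49.0]))  # evenly loaded
```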

pidstat 1

Shows CPU usage per process, allowing you to spot processes that consume disproportionate CPU cycles.

$ pidstat 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
07:41:02 PM UID   PID   %usr %system %guest %CPU CPU Command
07:41:03 PM 0     9     0.00 0.94    0.00   0.94  1  rcuos/0
07:41:03 PM 0   6521 1596.23 1.89   0.00 1598.11 27 java
07:41:03 PM 0   6564 1571.70 7.55   0.00 1579.25 28 java
...

In the example, two Java processes each consume roughly 1600% CPU, meaning each is using about 16 cores.
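Since pidstat reports %CPU relative to a single core (100% means one fully used core), translating the figures to core counts is just a division; the values below are taken from the sample output:

```python
def cores_used(pct_cpu):
    """Convert pidstat %CPU (100% == one core) to an approximate core count."""
    return pct_cpu / 100.0

for name, pct in [("java", 1598.11), ("java", 1579.25), ("rcuos/0", 0.94)]:
    print(f"{name}: ~{cores_used(pct):.1f} cores")
```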

iostat -xz 1

Reports detailed disk I/O statistics, including throughput, average request size, queue depth, and device utilization.

$ iostat -xz 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
73.96   0.00   3.73   0.03   0.06   22.21
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
xvda   0.00   0.23   0.21 0.18 4.52 2.08 34.37   0.00   9.98 13.80 5.42 2.44 0.09
...

Important columns:

r/s, w/s, rkB/s, wkB/s : read/write operations and volume.

await : average I/O wait time (ms).

avgqu-sz : average queue length; values > 1 suggest saturation.

%util : device utilization; > 60 % may impact performance, 100 % means full saturation.
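Those thresholds can be combined into a single verdict per device; the cut-offs mirror the rules of thumb above and are heuristics, not guarantees:

```python
def disk_verdict(util_pct, await_ms, avgqu_sz):
    """Rough health rating for one iostat device row."""
    if util_pct >= 100.0 or avgqu_sz > 1.0:
        return "saturated"  # device can take no more work
    if util_pct > 60.0 or await_ms > 20.0:
        return "busy"       # latency is likely already affected
    return "healthy"

# xvda from the sample: %util 0.09, await 9.98 ms, avgqu-sz 0.00
print(disk_verdict(util_pct=0.09, await_ms=9.98, avgqu_sz=0.00))  # healthy
```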

free -m

Shows memory usage in megabytes, distinguishing between used, free, and cached memory.

$ free -m
             total    used    free  shared  buffers  cached
Mem:        245998   24545  221453      83       59     541
-/+ buffers/cache:   23944  222053
Swap:            0       0       0

The “‑/+ buffers/cache” line reflects memory actually available to applications, because Linux uses free RAM for caching.
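The arithmetic behind that line is simply idle memory plus the reclaimable buffers and cache; the figures below (in MB) come from the sample output:

```python
def app_available_mb(free_mb, buffers_mb, cached_mb):
    """Memory an application could still obtain: idle RAM plus reclaimable cache."""
    return free_mb + buffers_mb + cached_mb

# From the sample: free=221453, buffers=59, cached=541
print(app_available_mb(221453, 59, 541))  # 222053, matching "-/+ buffers/cache"
```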

sar -n DEV 1

Monitors network interface throughput, helping determine whether the network is a bottleneck.

$ sar -n DEV 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
12:16:48 AM IFACE   rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
12:16:49 AM eth0   18763.00 5032.00 20686.42 478.30 0.00 0.00 0.00 0.00
...

In the sample, eth0 is moving about 21 MB/s combined (rxkB/s + txkB/s ≈ 21,165 kB/s, or roughly 169 Mbit/s), well below the capacity of a 1 Gbit/s link.
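As a sketch of the unit conversion, sar's kB/s columns can be turned into Mbit/s and percent of link capacity; the 1 Gbit/s link speed is an assumption from the surrounding text, not something sar reports here:

```python
def link_usage(rx_kb_s, tx_kb_s, link_mbit=1000.0):
    """Convert sar's rxkB/s + txkB/s into Mbit/s and percent of link capacity."""
    mbit = (rx_kb_s + tx_kb_s) * 8.0 / 1000.0  # kB/s -> kbit/s -> Mbit/s
    return mbit, 100.0 * mbit / link_mbit

mbit, pct = link_usage(20686.42, 478.30)       # eth0 figures from the sample
print(f"~{mbit:.0f} Mbit/s, {pct:.0f}% of a 1 Gbit/s link")
```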

sar -n TCP,ETCP 1

Shows TCP connection statistics, including active and passive connection rates and retransmissions.

$ sar -n TCP,ETCP 1
Linux 3.13.0-49-generic (titanclusters-xxxxx) 07/14/2015 _x86_64_ (32 CPU)
12:17:19 AM active/s passive/s iseg/s oseg/s
12:17:20 AM 1.00    0.00    10233.00 18846.00
...

High active/s values indicate a burst of locally initiated (outbound) connections, high passive/s a burst of accepted (inbound) ones, and a nonzero retrans/s (reported in the ETCP section) points to packet loss or saturation somewhere on the network path.
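Retransmissions are usually judged relative to the volume of outgoing segments. A hedged sketch follows; note that the retrans/s figure used here is hypothetical, since the sample output above was truncated before the ETCP section:

```python
def retrans_pct(oseg_per_s, retrans_per_s):
    """Retransmitted fraction of outgoing TCP segments, as a percentage."""
    if oseg_per_s == 0:
        return 0.0
    return 100.0 * retrans_per_s / oseg_per_s

# oseg/s from the sample; a retrans/s of 0.0 is a hypothetical, healthy value
print(f"{retrans_pct(18846.00, 0.0):.2f}% retransmitted")
```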

top

The top utility provides a real‑time snapshot of CPU, memory, and process activity, consolidating information from several of the previous commands.

$ top
top - 00:15:40 up 21:56, 1 user, load average: 31.09, 29.87, 29.92
Tasks: 871 total, 1 running, 868 sleeping, 0 stopped, 2 zombie
%Cpu(s): 96.8 us, 0.4 sy, 0.0 ni, 2.7 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
...

While useful, top shows only a refreshing snapshot, which makes intermittent patterns easy to miss; pausing the display (Ctrl-S to stop, Ctrl-Q to resume) or capturing its output for logging is advisable for deeper analysis.
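When logging top screens for later analysis, the summary lines can be parsed back into numbers; a sketch parsing the %Cpu(s) line shown above (the parse_top_cpu helper is illustrative, not a standard tool):

```python
import re

def parse_top_cpu(line):
    """Parse top's %Cpu(s) summary line into a dict of state -> percent."""
    pairs = re.findall(r"([\d.]+)\s*(us|sy|ni|id|wa|hi|si|st)", line)
    return {state: float(value) for value, state in pairs}

cpu = parse_top_cpu(
    "%Cpu(s): 96.8 us, 0.4 sy, 0.0 ni, 2.7 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st"
)
print(cpu["us"], cpu["id"])  # 96.8 2.7
```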

Conclusion

These ten commands form a quick‑check toolkit for Linux performance troubleshooting. By correlating their outputs you can identify CPU‑bound processes, memory pressure, disk I/O saturation, or network bottlenecks, and then focus optimization efforts on the offending subsystem or application.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Monitoring, performance, diagnostics, sysstat, Command-line
Written by

ITPUB

Official ITPUB account sharing technical insights, community news, and exciting events.
