
Diagnosing 100% Disk Usage and Cleaning Docker Logstash Container Logs

This article explains how to identify the cause of a full disk on a Linux server using the df and du commands, shows that oversized Logstash container logs were the culprit, and presents three practical solutions for reclaiming space: manual cleanup, a periodic cleanup script, and Docker log size limits.


Hello, I am Wukong.

1. Investigating 100% Disk Usage

1.1 View overall disk usage

The first command is df -h, which shows file system usage on a Linux system. The output columns are:

Filesystem : name of the file system

Size : total size

Used : used space

Avail : available space

Use% : usage percentage (100% means full)

Mounted On : mount point
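
A sample df -h output (the values below are illustrative, not the actual readings from the affected server):

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       200G  200G     0 100% /
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm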

From the result we see that /dev/sda2 mounted on / is at 100% usage.

1.2 Find large files in the directory

Use the du command to display disk usage of directories or files.

# First change into the root directory /
cd /
# List the total size of each entry under / and sort in descending order
# (sort -rh sorts human-readable sizes such as 1.5G and 200M correctly)
du -sh /* | sort -rh

Identify the biggest directory, e.g. /var, which occupies over 100 GB, then drill down:

du -sh /var/* | sort -rh

Eventually we discover that Logstash container log files consume several tens of gigabytes.
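
To pinpoint the offending files directly, you can also point du at the container log files themselves; a small sketch, assuming Docker's default data directory /var/lib/docker:

# List the five largest container json-file logs
du -sh /var/lib/docker/containers/*/*-json.log | sort -rh | head -n 5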

1.3 Why Logstash container logs are huge

Inspect the last 100 lines of a container log with tail or docker logs:

tail -n 100 /var/lib/docker/containers/<container_id>/<container_id>-json.log
# or (159 is the abbreviated ID of the Logstash container in this example)
docker logs --tail=100 159

The log is filled with Logstash's own parsing output: the backend services produce a large volume of logs, Logstash keeps printing what it parses to stdout, Docker's json-file driver captures all of that output into the container's -json.log file, and the file grows until the disk is full.
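
To confirm which container a given -json.log file belongs to (or to find a container's log path without searching the filesystem), docker inspect can print the log path; a small sketch:

# Print the log file path of one container
docker inspect --format '{{.LogPath}}' <container_id>
# Map every running container name to its log file
docker ps -q | xargs docker inspect --format '{{.Name}} {{.LogPath}}'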

2. Container Log Cleanup Solutions

Solution 1: Manual cleanup – quick but temporary.

Solution 2: Scripted periodic cleanup – removes logs but loses history.

Solution 3: Limit Docker container log size – permanent fix requiring container recreation.

2.1 Solution 1 – Manual Cleanup

cat /dev/null > /var/lib/docker/containers/<container_id>/<container_id>-json.log

Do not use rm: the Docker daemon keeps the log file open, so deleting it does not free the space while the container is running. Truncating it with cat /dev/null > … releases the space immediately, and df -h will show the reclaimed space afterwards.
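
If truncate (GNU coreutils) is available, an equivalent command is:

truncate -s 0 /var/lib/docker/containers/<container_id>/<container_id>-json.log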

2.2 Solution 2 – Scripted Periodic Cleanup

Provide a cleanup script:

#!/bin/sh

echo "======== start clean docker containers logs ========"
# Collect every container's json-file log under Docker's default data directory
logs=$(find /var/lib/docker/containers/ -name "*-json.log")
for log in $logs; do
    echo "clean logs : $log"
    # Truncate rather than delete, because the Docker daemon keeps the file open
    cat /dev/null > "$log"
done
echo "======== end clean docker containers logs ========"

Make it executable and run it:

chmod +x clean_docker_log.sh
./clean_docker_log.sh

Schedule it via a Linux cron job.
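
A sample crontab entry (the script path, log path, and schedule are assumptions for illustration), added via crontab -e, runs the script every night at 02:00:

0 2 * * * /opt/scripts/clean_docker_log.sh >> /var/log/clean_docker_log.log 2>&1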

2.3 Solution 3 – Limit Docker Container Log Size

Create or edit /etc/docker/daemon.json with:

{
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"}
}

This caps each container log file at 500 MB and keeps up to three rotated files. Restart Docker to apply:

systemctl daemon-reload
systemctl restart docker

Note: the limit only affects newly created containers; existing ones need to be recreated.
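
If recreating every container at once is not practical, the same limits can also be set per container when it is started; a minimal sketch using docker run (the image name is a placeholder):

docker run -d \
  --log-driver json-file \
  --log-opt max-size=500m \
  --log-opt max-file=3 \
  <image>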

References:

https://www.cnblogs.com/gcgc/p/10521005.html

Linux df command: https://www.runoob.com/linux/linux-comm-df.html

Linux du command: https://www.runoob.com/linux/linux-comm-du.html

Tags: Docker, Operations, Linux, Logstash, Disk Usage, Log Cleanup
Written by Wukong Talks Architecture

Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.
