
Why Deleting Files Doesn’t Free Disk Space on Linux—and How to Recover It

This guide explains why removing large log files on Linux often leaves disk usage unchanged, reveals the hidden “ghost” processes that keep deleted files open, and provides step‑by‑step commands to identify, release, and prevent such space leaks.

Xiao Liu Lab
Have you ever hit this weird situation? df -h says the disk is 95% used, du -sh /var/log accounts for only 10 GB, and rm -f catalina.out just deleted a 50 GB log, yet the space never came back!
Don't panic!

This is neither a disk failure nor a filesystem bug; a process is still quietly holding the deleted file open.

Today, I’ll walk you through locating the “ghost process”, freeing space, and permanently fixing it!

Why doesn't deleting a file free space?

In Linux, deleting a file (unlink) only removes the directory entry. As long as any process still holds an open file descriptor, the disk space remains allocated until the process closes the file or exits.

Key mechanism:

1. rm removes the file's directory entry (unlink).
2. If any process still has the file open (holds a file descriptor),
3. the disk space is not released.
4. Space is reclaimed only after every such process closes the file or exits.

Plain explanation:

It’s like borrowing a book; the library removes it from the shelf (rm), but as long as you haven’t returned it (process still writing), the book cannot be used by the next person.
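You can reproduce the whole effect in under a minute. This sketch assumes Linux with a writable /tmp; the filename is illustrative:

```shell
# Create a file, keep it open with tail, then delete it and watch the
# descriptor survive in /proc.
dd if=/dev/zero of=/tmp/demo.log bs=1M count=10 2>/dev/null  # a 10 MB file
tail -f /tmp/demo.log &                                      # a process keeps it open
TAIL_PID=$!
sleep 1
rm -f /tmp/demo.log                 # unlink: the name is gone...
ls -l /proc/$TAIL_PID/fd | grep deleted   # ...but the fd still points at the data
kill $TAIL_PID                      # closing the last fd finally frees the space
```

Until that `kill`, df will still count the 10 MB as used, while du (which walks directory entries) will not see it, which is exactly the df/du gap from the opening symptom.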

Hands-on: 3 steps to locate the process hogging the space

Step 1: Confirm existence of “deleted but not released” files

List open files that have already been deleted (lsof +L1 selects files with a link count below 1; look for the "(deleted)" flag)

lsof +L1

Example output:

COMMAND  PID  USER     FD  TYPE DEVICE SIZE/OFF   NODE    NAME
java     1234 root     1w  REG  253,0  52428800   1234567 /app/logs/catalina.out (deleted)
nginx    5678 www-data 5w  REG  253,0  1073741824 7654321 /var/log/nginx/access.log (deleted)

Key fields:

PID: culprit process ID
SIZE/OFF: space occupied (bytes)
(deleted): file marked as deleted but not released
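To see how much space the ghosts are holding in total, you can sum the SIZE/OFF column. The field number below matches the output layout above but can shift if lsof prints extra columns on your system, so treat it as a sketch:

```shell
# Sum the SIZE/OFF column (7th field here) of deleted-but-open files, in MB
lsof +L1 2>/dev/null | awk '/deleted/ {sum += $7} END {printf "Held by deleted files: %.1f MB\n", sum/1048576}'
```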

Step 2: Inspect process details

Show process information

ps -fp 1234

List all files opened by the process (verification)

lsof -p 1234 | grep deleted

Step 3: Release space (three options, from safest to riskiest)

Option 1: Graceful restart (recommended)

# Example for Tomcat
./tomcat/bin/shutdown.sh && ./tomcat/bin/startup.sh

Option 2: Reload (e.g., Nginx)

nginx -s reload
✅ Advantage: service continues without interruption (suitable for Nginx/Logrotate scenarios)

Option 3: Force kill (use with caution)

kill -9 1234
❗ Only for stateless processes (temporary scripts, runaway Java processes)

Never use on databases or core services!
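When neither a restart nor a reload is acceptable, there is one more escape hatch worth knowing: truncate the deleted file through the holder's fd table in /proc. The PID and fd number below come from the lsof example above; substitute your own:

```shell
# "1w" in lsof's FD column means file descriptor 1, open for writing.
# Truncating through /proc empties the file without touching the process:
: > /proc/1234/fd/1   # size drops to 0; the process keeps its descriptor
```

Caveat: the writer's file offset is unchanged, so a process not writing in append mode may leave a sparse file behind. That is harmless for typical catalina.out-style logs, but it is still a workaround, not a fix.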

How to prevent? Three golden rules

1. Log rotation must use logrotate with copytruncate or reload
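A minimal /etc/logrotate.d stanza along these lines (path, schedule, and retention are all illustrative):

```
/app/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    copytruncate    # copy the log, then truncate it in place: the writer's fd stays valid
}
```

Without copytruncate, logrotate renames the file out from under the writer, and you are back to a deleted-but-held file until the application reopens its log.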

2. Java applications: avoid redirecting logs directly to a file

java -jar app.jar > app.log 2>&1 &  # High risk!
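A safer pattern is to let the JVM roll its own logs instead of redirecting stdout. This logback.xml fragment is a sketch with illustrative names, not a drop-in config:

```
<!-- Roll the file inside the application rather than via shell redirection -->
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>app.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>app.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
    <maxHistory>7</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```

Because the appender closes and reopens files itself, deleting an old rolled log never strands an open descriptor.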

3. Monitor “deleted but still held” files

# Prometheus node_exporter exposes filesystem and inode metrics
# (node_filesystem_files, node_filesystem_files_free), but deleted-but-held
# space is easiest to catch with a custom alert script:
if lsof +L1 | grep -q deleted; then
  echo "ALERT: Deleted files still held by processes!" | mail -s "Disk Leak" [email protected]
fi

Quick self‑check checklist

Symptom: df and du results differ greatly — Cause: process holds deleted file — Solution: locate with lsof +L1

Symptom: Disk remains full after log deletion — Cause: application hasn't closed file descriptor — Solution: restart or reload process

Symptom: Space suddenly frees — Cause: process unexpectedly exited — Solution: check for abnormal crashes

Final note

In Linux, a file’s lifetime is dictated by processes.

Deletion is only the beginning; releasing the space is the goal.

Master lsof +L1 and you’ll see through the “disappearing” disk mystery at a glance.

Tags: Process Management, Linux, System Administration, disk space, log rotation, lsof, Deleted Files
Written by

Xiao Liu Lab

An operations lab passionate about server tinkering 🔬 Sharing automation scripts, high-availability architecture, alert optimization, and incident reviews. Using technology to reduce overtime and experience to avoid major pitfalls. Follow me for easier, more reliable operations!
