
Why Your Disk Shows Free Space but Files Won’t Write: Mastering Inodes

This article explains how inode exhaustion on Linux filesystems can cause "No space left on device" errors despite available disk space, details inode structure and allocation, and provides step-by-step diagnostics, monitoring scripts, best-practice recommendations, and recovery procedures for preventing and resolving inode-related issues.

MaGe Linux Operations

Overview

In Linux filesystems, each file and directory consumes an inode, a data structure that stores metadata (permissions, owner, timestamps) and pointers to data blocks. On ext4 the total number of inodes is fixed when the filesystem is created; XFS and Btrfs allocate inodes dynamically. When many small files are created, the inodes can be exhausted before the disk is actually full, producing the error No space left on device even though df -h shows plenty of free space.

Key technical characteristics

Metadata separation : Inodes hold file metadata, data blocks store file contents.

Fixed inode count : Determined at format time for ext4 and many other filesystems; cannot be expanded later.

Direct + indirect indexing : ext4 uses 12 direct pointers and three levels of indirect pointers, allowing files up to several terabytes.

Hard‑link mechanism : Multiple filenames can reference the same inode, saving space.
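The hard-link behaviour is easy to demonstrate directly: two directory entries point at one inode, and the link count reflects both names. A minimal sketch using a temporary directory (GNU stat assumed):

```shell
# Create a file, then a hard link to it: two names, one inode.
tmpdir=$(mktemp -d)
echo "hello" > "$tmpdir/a.txt"
ln "$tmpdir/a.txt" "$tmpdir/b.txt"       # second name for the same inode

ino_a=$(stat -c %i "$tmpdir/a.txt")      # inode number of a.txt
ino_b=$(stat -c %i "$tmpdir/b.txt")      # identical inode number
links=$(stat -c %h "$tmpdir/a.txt")      # i_links_count is now 2

echo "a=$ino_a b=$ino_b links=$links"
rm -r "$tmpdir"
```

Deleting one name only decrements the link count; the inode (and its data blocks) is freed when the count reaches zero.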

Preparation

Check filesystem type and inode usage:

# df -T
# df -i
# find /var/log -type f | wc -l   # count files in a directory

Install required tools (e.g., e2fsprogs, xfsprogs, sysstat) and verify inode support:

# Ubuntu/Debian
sudo apt update && sudo apt install -y e2fsprogs xfsprogs sysstat
# CentOS/RHEL
sudo yum install -y e2fsprogs xfsprogs sysstat
# Verify
tune2fs -l /dev/sda1 | grep -i inode

Inode structure (ext4)

The on‑disk inode definition (simplified) is:

struct ext4_inode {
  __le16 i_mode;      // file type and permissions
  __le16 i_uid;       // owner UID (low 16 bits)
  __le32 i_size_lo;   // lower 32 bits of size
  __le32 i_atime;     // access time
  __le32 i_ctime;     // status change time
  __le32 i_mtime;     // modification time
  __le32 i_dtime;     // deletion time
  __le16 i_gid;       // group GID (low 16 bits)
  __le16 i_links_count; // hard‑link count
  __le32 i_blocks_lo; // block count (low 32 bits)
  __le32 i_flags;     // file flags
  __le32 i_block[15]; // block pointers (direct + indirect)
  // ... more fields
};

The i_block[15] array provides 12 direct pointers (48 KB total), a single‑indirect block (~4 MB), a double‑indirect block (~4 GB) and a triple‑indirect block (~4 TB). The theoretical maximum file size is about 4 TB; ext4 can support up to 16 TB using extents.
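These capacities follow from simple arithmetic, assuming 4 KB blocks and 4-byte (__le32) block pointers, so each indirect block holds 1024 pointers. A quick shell check:

```shell
# Arithmetic check of the i_block capacities described above.
BLOCK=4096
PTRS=$((BLOCK / 4))                        # 1024 pointers per indirect block

direct=$((12 * BLOCK))                     # 12 direct pointers
single=$((PTRS * BLOCK))                   # one single-indirect block
double=$((PTRS * PTRS * BLOCK))            # one double-indirect block
triple=$((PTRS * PTRS * PTRS * BLOCK))     # one triple-indirect block

echo "direct:          $((direct / 1024)) KB"                       # 48 KB
echo "single-indirect: $((single / 1024 / 1024)) MB"                # 4 MB
echo "double-indirect: $((double / 1024 / 1024 / 1024)) GB"         # 4 GB
echo "triple-indirect: $((triple / 1024 / 1024 / 1024 / 1024)) TB"  # 4 TB
```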

Inode allocation

Default allocation on ext4 creates one inode per 16 KB of space. Example for a 100 GB disk:

# 100 GB → (100 * 1024 * 1024 KB) / 16 KB ≈ 6,553,600 inodes
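The same calculation can be wrapped in a small estimator for any disk size and bytes-per-inode ratio (a sketch; estimate_inodes is a hypothetical helper, not a standard tool):

```shell
# Estimate the inode count mkfs.ext4 would allocate for a given disk size,
# assuming the default ratio of one inode per 16384 bytes unless overridden.
estimate_inodes() {
  local size_gb=$1 bytes_per_inode=${2:-16384}
  echo $(( size_gb * 1024 * 1024 * 1024 / bytes_per_inode ))
}

estimate_inodes 100          # 6553600 at the default 16 KB ratio
estimate_inodes 100 4096     # 26214400 with -i 4096 (small-file workload)
```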

Custom allocation can be set during formatting:

# More inodes for many small files (4 KB per inode)
sudo mkfs.ext4 -i 4096 /dev/sdb1
# Fewer inodes for large files (64 KB per inode)
sudo mkfs.ext4 -i 65536 /dev/sdb1
# Directly specify inode count
sudo mkfs.ext4 -N 10000000 /dev/sdb1

Viewing and counting inodes

# Show inode number of a file
ls -i /etc/passwd
# Detailed inode info
stat /etc/passwd
# Count files (inode consumption) in a directory
find /var/log -type f | wc -l
# List top directories by inode usage
sudo du --inodes -S /var | sort -rh | head -10

Fault diagnosis

Simulate inode exhaustion

# Create a 100 MB loop file with 4 KB per inode (~25,600 inodes)
sudo dd if=/dev/zero of=/tmp/disk.img bs=1M count=100
sudo mkfs.ext4 -i 4096 /tmp/disk.img
sudo mkdir -p /mnt/test && sudo mount -o loop /tmp/disk.img /mnt/test
# Fill inodes quickly
cd /mnt/test
for i in {1..30000}; do
  touch file_$i 2>/dev/null || { echo "Inode exhausted after $i files"; break; }
done
# Verify inode usage
df -i /mnt/test
# Attempt new file (fails)
touch /mnt/test/newfile
# Disk space still low usage
df -h /mnt/test
# Cleanup
sudo umount /mnt/test && rm /tmp/disk.img

Production‑environment investigation

# Confirm inode exhaustion
df -i
# Locate directories consuming most inodes
sudo find / -xdev -type f | cut -d '/' -f2 | sort | uniq -c | sort -rn | head -10
# Drill into a specific directory (e.g., /var/log)
sudo find /var/log -type f -printf "%i\n" | sort -u | wc -l
# Identify processes holding deleted files
sudo lsof +L1

Monitoring scripts

Inode monitoring script (inode_monitor.sh)

#!/bin/bash
# inode_monitor.sh – monitor inode usage and alert
THRESHOLD=80
WEBHOOK_URL="https://example.com/webhook"

df -i | tail -n +2 | while read -r filesystem inodes iused ifree iuse_percent mountpoint; do
  iuse=${iuse_percent%\%}
  # Skip pseudo-filesystems that report "-" instead of a percentage
  case "$iuse" in ''|*[!0-9]*) continue ;; esac
  if [ "$iuse" -ge "$THRESHOLD" ]; then
    message="Inode alert
Mount: $mountpoint
Usage: $iuse_percent
Used: $iused
Free: $ifree"
    curl -X POST "$WEBHOOK_URL" -H 'Content-Type: application/json' \
      -d "{\"msgtype\":\"text\",\"text\":{\"content\":\"$message\"}}"
    echo "$(date) - $mountpoint inode usage $iuse_percent" >> /var/log/inode_alerts.log
    echo "Top 10 inode-consuming directories:" >> /var/log/inode_alerts.log
    sudo du --inodes -x "$mountpoint" 2>/dev/null | sort -rn | head -10 >> /var/log/inode_alerts.log
  fi
done

Schedule via crontab (every 5 minutes):

*/5 * * * * /opt/scripts/inode_monitor.sh

Inode cleanup script (cleanup_old_files.sh)

#!/bin/bash
LOG_DIR="/var/log/app"
RETENTION_DAYS=7

before=$(find "$LOG_DIR" -type f | wc -l)
find "$LOG_DIR" -type f -mtime +"$RETENTION_DAYS" -delete
after=$(find "$LOG_DIR" -type f | wc -l)
freed=$((before - after))

echo "Cleaned $freed files (before: $before, after: $after)" >> /var/log/cleanup.log

Prometheus / node_exporter configuration

# /etc/prometheus/prometheus.yml (excerpt)
scrape_configs:
- job_name: 'node_exporter'
  static_configs:
  - targets: ['localhost:9100']

# Query inode usage percentage
(node_filesystem_files - node_filesystem_files_free) / node_filesystem_files * 100

# Alert rules (high usage > 80 %)
- alert: InodeUsageHigh
  expr: (node_filesystem_files - node_filesystem_files_free) / node_filesystem_files * 100 > 80
  for: 10m
  labels:
    severity: warning
  annotations:
    summary: "Inode usage high on {{ $labels.mountpoint }}"
    description: "Current usage {{ $value }}%"
- alert: InodeCritical
  expr: node_filesystem_files_free < 10000
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: "Inode critically low on {{ $labels.mountpoint }}"
    description: "Only {{ $value }} inodes left"

Best practices and caveats

Capacity planning : Choose inode density based on workload. Example commands:

# Small‑file workloads (e.g., logs, mail)
sudo mkfs.ext4 -i 8192 /dev/sdb1
# Large‑file workloads (e.g., videos, databases)
sudo mkfs.ext4 -i 65536 /dev/sdb1
# Mixed workloads – use default 16 KB
sudo mkfs.ext4 /dev/sdb1

Allocate 1.5‑2× the expected file count and monitor growth trends.
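As a sketch, that sizing rule can be scripted; plan_inodes is a hypothetical helper that applies an integer headroom factor (shell arithmetic is integer-only, so a 1.5× margin would need awk or similar):

```shell
# Turn an expected file count into an mkfs.ext4 -N value with headroom.
plan_inodes() {
  local expected=$1 margin=${2:-2}   # default: 2x the expected file count
  echo $(( expected * margin ))
}

N=$(plan_inodes 5000000)             # 5M expected files -> 10M inodes
echo "sudo mkfs.ext4 -N $N /dev/sdb1"
```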

File management : Regularly clean temporary directories ( systemd-tmpfiles --clean), use logrotate to compress and delete old logs, and employ hard links for duplicate content.

Filesystem selection :

XFS : Dynamic inode allocation, suitable for large‑file workloads; avoid > 80 % usage for performance.

Btrfs : Copy‑on‑write, snapshots, good for container environments.

ext4 : General‑purpose, stable, but inode count fixed at format time.

Configuration notes :

Inode count cannot be changed after formatting ext4/xfs; re‑format required.

On XFS, inode usage above 80 % may degrade performance.

Hard‑linked files keep their inode allocated until all links are removed.

Tmpfs also has inode limits; adjust with mount -o size=1G,nr_inodes=10k -t tmpfs tmpfs /mnt/ram.

Common error scenarios

No space left on device – caused by inode exhaustion or full disk; check with df -i and clean files.

Too many links – hard-link count exceeds the filesystem limit (65,000 on ext4); reduce the number of links.

High inode usage but no visible files – deleted files still held by processes; find with lsof +L1 and restart processes.

Unexpected inode count after formatting – the bytes-per-inode ratio or block size differs from the default; check the mkfs.ext4 -i and -b options that were used.
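The "deleted files still held by processes" scenario is easy to reproduce (a minimal sketch; tail stands in for any long-running process holding a log file open):

```shell
# A process keeps a deleted file's inode (and space) alive until it exits.
f=$(mktemp)
tail -f "$f" &                        # background process holds the file open
pid=$!
rm "$f"                               # the name is gone, but the inode is not freed
ls -l /proc/$pid/fd | grep deleted    # the fd still points at a "(deleted)" path
kill $pid                             # only now is the inode actually released
```

This is why restarting (or signalling) the holding process is the fix, not deleting more files.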

Diagnostic commands

# Kernel messages about inode issues
sudo dmesg | grep -i "ext4\|xfs\|inode"
# System journal
sudo journalctl -k | grep -i inode
# Audit recent file activity (requires auditd; PATH records include inode numbers)
sudo ausearch -ts recent | grep -i inode
# Find directories with rapid inode growth (last day)
sudo find / -xdev -type f -mtime -1 | cut -d '/' -f2 | sort | uniq -c | sort -rn
# Identify processes holding the most open files
sudo lsof | awk '{print $2}' | sort | uniq -c | sort -rn | head -10

Performance monitoring

# Real‑time inode usage
watch -n 5 'df -i'
# File creation rate (requires inotify-tools)
inotifywait -m -r /var/log -e create
# sar filesystem activity
sar -F 1 10

Backup and recovery

Regularly back up inode statistics and superblock information:

# Save inode usage
df -i > /data/backups/filesystem/inode_usage_$(date +%Y%m%d).txt
# Save superblock info
sudo tune2fs -l /dev/sda1 > /data/backups/filesystem/superblock_$(date +%Y%m%d).txt
# Save directory tree (metadata only)
sudo find / -xdev > /data/backups/filesystem/file_tree_$(date +%Y%m%d).txt
# Save inode‑to‑path map
sudo find / -xdev -printf "%i %p\n" > /data/backups/filesystem/inode_map_$(date +%Y%m%d).txt

Recovery after accidental mass deletion:

Stop affected services.

Run filesystem check (e.g., fsck.ext4 -f /dev/sda1) after unmounting.

Use extundelete or testdisk to restore files.

Verify inode usage with df -i and file counts.

Advanced learning directions

Deep dive into filesystem internals: ext4 extents, XFS B‑trees, Btrfs copy‑on‑write.

Explore high‑performance filesystems such as F2FS and ZFS.

Study object‑storage solutions (MinIO, Ceph) for massive small‑file workloads.


Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
