
How to Diagnose, Clean, and Prevent Docker Log Disk Exhaustion

This guide walks you through identifying which Docker containers are consuming disk space, safely truncating oversized log files, configuring log drivers and rotation policies, setting up centralized logging, and automating cleanup to avoid future disk‑full incidents in production environments.


Overview

Disk alerts were triggered by Docker’s default json-file log driver, which performs no rotation out of the box and therefore writes unbounded logs. A Java container running at DEBUG level filled 180 GB of disk, 67 GB of which came from a single log file.

Identification

Check host usage with df -h and Docker’s usage with docker system df -v, which breaks consumption down per image, container, and volume. In the example, the culprit was container order-service, whose log file had grown to 67 GB.
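The two checks can be run together; note that both need a Docker host (and root for the filesystem paths):

```shell
# Overall filesystem usage on the Docker host
df -h /var/lib/docker

# Docker's own accounting, broken down per image, container, and volume
docker system df -v
```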

Log Inspection

Find large log files:

sudo du -sh /var/lib/docker/containers/*/ | sort -rh | head -20
sudo ls -lh "$(docker inspect --format='{{.LogPath}}' order-service)"

Safe Cleanup

Truncate the log file while keeping the file descriptor:

log_path=$(docker inspect --format='{{.LogPath}}' order-service)
sudo truncate -s 0 "$log_path"

Do NOT delete the file with rm because Docker holds the handle; space is released only after the container or the Docker daemon is restarted.
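If a log file has already been deleted while the daemon still holds it open, the "invisible" space can be located with lsof (its +L1 flag lists open files with a link count of zero, i.e. deleted but still held; lsof being installed is an assumption):

```shell
# List open-but-deleted files still pinning disk space
sudo lsof +L1 | grep json.log
```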

Automated Cleanup Script

#!/bin/bash
set -euo pipefail
CLEAN_THRESHOLD_MB=500
WARN_THRESHOLD_MB=200
LOG_FILE="/var/log/docker-log-cleaner.log"

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"; }

for cid in $(docker ps -aq); do
  name=$(docker inspect --format='{{.Name}}' "$cid" | sed 's#^/##')
  path=$(docker inspect --format='{{.LogPath}}' "$cid")
  [ -f "$path" ] || continue
  size=$(stat -c%s "$path")
  size_mb=$((size / 1024 / 1024))
  if [ "$size_mb" -ge "$CLEAN_THRESHOLD_MB" ]; then
    log "Cleaning $name ($size_mb MB)"
    sudo truncate -s 0 "$path"
  elif [ "$size_mb" -ge "$WARN_THRESHOLD_MB" ]; then
    log "Warning $name ($size_mb MB)"
  fi
done
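To run the script unattended, a cron entry is the simplest option (the install path and schedule here are assumptions):

```shell
# /etc/cron.d/docker-log-cleaner — run hourly as root
0 * * * * root /usr/local/bin/docker-log-cleaner.sh
```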

Global Log Configuration

Edit /etc/docker/daemon.json and restart Docker:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3",
    "compress": "true"
  }
}

All log-opts values must be strings, and the settings apply only to containers created after the restart; existing containers keep their previous log configuration. (The tag option belongs to remote drivers such as fluentd or syslog and is not accepted by json-file.)

For production, the local driver is recommended: it stores logs in an optimized binary (protobuf-based) format with compression and rotation enabled by default.
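A minimal daemon.json for the local driver might look like this (the size values are illustrative):

```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
```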

Per‑Container Overrides

When running a container directly:

docker run -d \
  --name high-log-app \
  --log-driver json-file \
  --log-opt max-size=200m \
  --log-opt max-file=5 \
  my-app:latest

In docker‑compose.yml you can set:

services:
  order-service:
    image: order-service:2.1
    logging:
      driver: json-file
      options:
        max-size: "200m"
        max-file: "5"
        compress: "true"

Log Rotation Details

The json-file driver rotates when the current file reaches max-size: the file is renamed with a numeric suffix (and gzip-compressed when compress is enabled), and the oldest file is deleted once the max-file limit is exceeded.
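The scheme can be sketched in shell (a simplified illustration only; the real rotation is performed inside the Docker daemon, not by an external script):

```shell
#!/bin/bash
# Simplified sketch of max-size/max-file rotation (illustrative only).
rotate() {
  local log="$1" max_file="$2" i
  # Drop the oldest archive once the max-file limit would be exceeded.
  rm -f "$log.$((max_file - 1)).gz"
  # Shift the remaining archives up by one slot.
  for ((i = max_file - 2; i >= 1; i--)); do
    if [ -f "$log.$i.gz" ]; then mv "$log.$i.gz" "$log.$((i + 1)).gz"; fi
  done
  # Compress the current file into slot .1 and start a fresh, empty log.
  gzip -c "$log" > "$log.1.gz"
  : > "$log"
}
```

With max-file=3 this keeps at most the current log plus two compressed archives, mirroring the driver's behavior.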

Centralized Logging Options

Loki + Promtail – lightweight, integrates with Grafana, no full‑text search.

EFK (Elasticsearch‑Fluentd‑Kibana) – full‑text search, higher resource usage.

Docker fluentd driver – sends logs directly to a Fluentd collector; docker logs stops working for such containers unless dual logging (available since Docker Engine 20.10) caches a local copy.
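As an illustration, switching a container to the fluentd driver looks like this (the collector address and tag pattern are assumptions for a locally running Fluentd):

```shell
docker run -d \
  --name order-service \
  --log-driver fluentd \
  --log-opt fluentd-address=127.0.0.1:24224 \
  --log-opt tag=docker.{{.Name}} \
  order-service:2.1
```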

Best Practices

Use standard log levels (FATAL, ERROR, WARN, INFO) and keep DEBUG disabled in production.

Emit structured JSON logs for machine parsing.

Run docker system prune -f regularly to remove stopped containers, unused networks, dangling images, and build cache (add -a to also remove all unused images).

Restarting a container does not delete its log file; the space is freed only by truncation or by removing the container with docker rm.
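The structured-logging advice above can be illustrated with a tiny shell helper (the field names are illustrative, not a standard):

```shell
# Emit one structured JSON log line per event (field names are illustrative).
log_json() {
  printf '{"ts":"%s","level":"%s","service":"order-service","msg":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2"
}

log_json INFO "order created"
# Such lines can later be filtered by field, e.g.:
#   docker logs order-service | jq -r 'select(.level == "ERROR") | .msg'
```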

Monitoring & Alerts

Prometheus rule for disk usage >80%:

groups:
- name: docker-disk-alerts
  rules:
  - alert: DiskSpaceWarning
    expr: (1 - node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Disk usage exceeds 80%"

A custom script can expose container log size as a Prometheus gauge using the node_exporter textfile collector.
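A sketch of such an exporter script (the metric name, label, and textfile directory are assumptions, and it needs a Docker host to run):

```shell
#!/bin/bash
# Export per-container log sizes for the node_exporter textfile collector.
set -euo pipefail

TEXTFILE_DIR="${TEXTFILE_DIR:-/var/lib/node_exporter/textfile_collector}"
OUT="$TEXTFILE_DIR/docker_log_size.prom"
TMP="$OUT.tmp"

{
  echo '# HELP docker_container_log_size_bytes Size of a container json log file.'
  echo '# TYPE docker_container_log_size_bytes gauge'
  for cid in $(docker ps -aq); do
    name=$(docker inspect --format='{{.Name}}' "$cid" | sed 's#^/##')
    path=$(docker inspect --format='{{.LogPath}}' "$cid")
    [ -f "$path" ] || continue
    printf 'docker_container_log_size_bytes{container="%s"} %s\n' \
      "$name" "$(stat -c%s "$path")"
  done
} > "$TMP"
mv "$TMP" "$OUT"   # atomic rename so the collector never reads a partial file
```

Run it from cron at the same cadence as node_exporter scrapes; the atomic rename avoids exposing half-written metrics.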

Key Commands Cheat Sheet

Urgent truncate:

sudo truncate -s 0 "$(docker inspect --format='{{.LogPath}}' <container>)"

Set global limits: edit daemon.json and run systemctl restart docker

Find the biggest consumers: docker system df -v

Remove stopped containers: docker system prune -f

Check a container’s log driver:

docker inspect --format='{{.HostConfig.LogConfig.Type}}' <container>
Written by Ops Community, a leading IT operations community where professionals share and grow together.