
Collecting and Processing Docker Logs with ELK: Installation and Configuration Guide

This article explains the challenges of Docker log collection and provides a step‑by‑step guide for installing ELK components, configuring Logstash, Kibana, and various log shippers such as Filebeat, logging drivers, Logspout, and Logz.io to reliably gather and visualize container logs.


Docker containers generate logs that are transient, distributed, and isolated, making log collection a complex task that requires a robust solution.
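By default, Docker's json-file driver writes each container's stdout/stderr to a file under /var/lib/docker/containers/, one JSON object per line. A minimal sketch of what such a line looks like and how a shipper might pull a field out of it with POSIX tools (the request and timestamp below are made-up sample values):

```shell
# An illustrative line in the format Docker's json-file driver produces;
# the log message and timestamp are fabricated sample values.
line='{"log":"GET /health 200\n","stream":"stdout","time":"2023-01-01T12:00:00.000000000Z"}'

# Extract the "stream" field with sed (no jq dependency assumed)
stream=$(printf '%s' "$line" | sed -n 's/.*"stream":"\([^"]*\)".*/\1/p')
echo "$stream"
```

Because these files live per-container on each host and disappear with the container, collecting them reliably is what the rest of this guide addresses.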

ELK (Elasticsearch, Logstash, Kibana) is a popular stack for handling container logs, though setting up the workflow can be challenging; once configured, Kibana dashboards can visualize Docker logs.

The typical ELK log‑collection flow for Dockerized environments involves Logstash pulling logs from containers or hosts, parsing them with filters, forwarding to Elasticsearch for indexing, and visualizing with Kibana.

Components can be installed in a single container or split across multiple containers; the docker-elk project, a Docker Compose setup that bundles the stack, is recommended and supports rich runtime parameters.

Before installation, ensure ports 5601 (Kibana), 9200 (Elasticsearch), and 5044 (Logstash) are free and set the kernel parameter vm.max_map_count to at least 262144:

sudo sysctl -w vm.max_map_count=262144
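The sysctl command above only applies until the next reboot. To make the setting persistent, it can also be added to /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/), a standard Linux mechanism:

```
# /etc/sysctl.conf
vm.max_map_count=262144
```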

Clone and start the stack:

git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up

After the services start, verify Elasticsearch with curl localhost:9200 and open Kibana at http://[serverIP]:5601 (you must create an index pattern before proceeding).

Sending Docker logs to ELK can be done via several methods:

Filebeat: a lightweight shipper that reads JSON log files and forwards them to Logstash or Elasticsearch. Example Filebeat configuration:

prospectors:
  - paths:
      - /var/log/containers/
    document_type: syslog
output:
  logstash:
    enabled: true
    hosts:
      - elk:5044

Logging driver: Docker's built-in syslog driver can route container stdout/stderr to a syslog server, which Logstash can then ingest. Example:

docker run \
  --log-driver=syslog \
  --log-opt syslog-address=tcp:// :5000 \
  --log-opt syslog-facility=daemon \
  alpine ash
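Instead of passing these flags on every docker run, the syslog driver can also be set daemon-wide in /etc/docker/daemon.json. A sketch, where the address is an illustrative placeholder to replace with your own syslog endpoint:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://syslog.example.com:5000",
    "syslog-facility": "daemon"
  }
}
```

The Docker daemon must be restarted for the change to take effect, and it applies only to containers started afterwards.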

Logspout: a lightweight router that attaches to the Docker socket and forwards logs to syslog:

sudo docker run -d --name="logspout" \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tls:// :5000

Logz.io collector: similar to Logspout but also captures Docker stats and daemon events:

docker run -d --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  logzio/logzio-docker -t

Data persistence is handled by Elasticsearch; the default data directory is /var/lib/elasticsearch. Logstash configuration (input, filter, output) is crucial for adding context to container logs; examples include a Beats input for Filebeat, a syslog input for logging drivers, and an Elasticsearch output.
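As an illustration of that input/filter/output structure, here is a minimal Logstash pipeline sketch; the ports match the earlier examples, but the grok pattern is a placeholder assumption to adapt to your log format:

```
input {
  beats {
    port => 5044      # Filebeat
  }
  syslog {
    port => 5000      # Docker syslog logging driver / Logspout
  }
}

filter {
  grok {
    match => { "message" => "%{GREEDYDATA:log_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```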

After updating the Logstash config, restart the Logstash container and verify indices with curl 'localhost:9200/_cat/indices?v'. Then open Kibana, create the logstash-* index pattern, and explore the logs.

In conclusion, there is no perfect Docker logging solution; each approach—logging drivers, Filebeat, or SaaS platforms—has trade‑offs, but Logz.io’s collector offers a comprehensive pipeline, and Dockerbeat can be used for additional metrics.

The next article in this series will cover analyzing and visualizing Docker logs in Kibana.

Tags: monitoring, Docker, logging, ELK, Logstash, Kibana, Filebeat
Written by

DevOps

Shares premium content and events on trends, applications, and practices in development efficiency, AI, and related technologies. The IDCF (International DevOps Coach Federation) trains end-to-end development-efficiency talent, connecting high-performance organizations and individuals in pursuit of excellence.
