Operations · 22 min read

Comprehensive Guide to Log Analysis with the ELK Stack and Docker Deployment

This article provides a detailed overview of log analysis, explains its importance, introduces major collection tools such as Splunk and ELK, and walks through step‑by‑step Docker‑based installation and configuration of Elasticsearch, Logstash, Kibana, and a Spring Boot application for centralized logging.


Overview of Log Analysis

Log analysis is the primary method for operations engineers to troubleshoot system failures, monitor server load, and identify performance or security issues by examining system, application, and security logs.

Functions of Log Analysis

Real‑time monitoring of system status.

Bug location and debugging.

Website traffic monitoring.

SQL statement optimization.

Main Collection Tools

日志易 (Rizhiyi): a commercial Chinese log monitoring and auditing platform.

Splunk: commercial software composed of an Indexer, a Search Head, and Forwarders. The Indexer stores and indexes data, similar to Elasticsearch; the Search Head provides the UI (Kibana-like) and distributed search; the Forwarder ships data, comparable to Logstash or Filebeat.

ELK: an open‑source log analysis platform consisting of Elasticsearch, Logstash, and Kibana.

Centralized Log System Characteristics

(Illustrated with diagrams in the original article; a centralized log system typically needs to cover log collection, transmission, storage, analysis, and alerting.)

ELK Overview

ELK is an open‑source data analysis platform that processes massive log data, offering real‑time search, visualization, and analysis.

Elasticsearch: distributed search and analytics engine.

Logstash: data collection, processing, and transformation tool.

Kibana: data visualization interface.

Logstash Data Flow

Logstash pipelines consist of three plugin stages: Inputs , Filters , and Outputs . Codecs can be used in inputs/outputs for data format conversion.
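The pipeline used later in this article defines only inputs and outputs. As a hypothetical illustration of the Filters stage, a grok filter could parse plain-text lines into structured fields (the pattern and field names below are assumptions, not part of the deployment):

```conf
# Hypothetical filter stage: parse lines like
# "2024-01-01T12:00:00 INFO something happened" into fields.
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => ["timestamp", "ISO8601"]  # use the parsed time as the event timestamp
  }
}
```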

Building the ELK Platform (Docker‑Based)

# Set hostname
hostnamectl set-hostname elk
# Configure network interface
nmcli connection modify ens160 ipv4.method manual ipv4.addresses 192.168.8.111/24 ipv4.gateway 192.168.8.254 ipv4.dns 192.168.8.254 connection.autoconnect yes
nmcli connection up ens160

Enable IP forwarding for Docker networking:

# Enable routing
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p

Deploy Elasticsearch

# Create Docker network
docker network create -d bridge elk
# Pull and run Elasticsearch container
docker pull elasticsearch:7.12.1
# Publish fixed ports so Elasticsearch is reachable at the host IP used later (with -P, Docker would assign random host ports)
docker run -d --name es --net elk -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.12.1

Mount configuration files and adjust CORS settings in elasticsearch.yml :

http.cors.enabled: true
http.cors.allow-origin: "*"
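A bind mount is one way to get these settings into the container; the host path below is an assumption, adjust it to your layout:

```shell
# Re-create the container with a custom elasticsearch.yml mounted over the default
docker run -d --name es --net elk -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -v /usr/local/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  elasticsearch:7.12.1
```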

Deploy Kibana

# Pull Kibana image
docker pull kibana:7.12.1
# Run Kibana container linked to Elasticsearch
# Publish Kibana's fixed port 5601 so the UI is reachable at the host IP used later
docker run -d --name kibana --net elk -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://es:9200" -e "I18N_LOCALE=zh-CN" kibana:7.12.1

Deploy Logstash

# Pull Logstash image
docker pull logstash:7.12.1
# Run container with mounted config, data, and pipeline directories
docker run -d --name logstash --net elk \
  -p 5044:5044 -p 9600:9600 \
  -v /usr/local/elk/logstash/config/:/usr/share/logstash/config \
  -v /usr/local/elk/logstash/pipeline/:/usr/share/logstash/pipeline \
  logstash:7.12.1

Example logstash.yml snippet (172.18.0.2 is the Elasticsearch container's address on the elk bridge network; on a user-defined bridge network the container name es also resolves):

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: ["http://172.18.0.2:9200"]

Example logstash.conf pipeline:

input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["http://172.18.0.2:9200"]
    index => "elk"
    codec => "json"
  }
  stdout { codec => rubydebug }
}
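The tcp input above uses the json_lines codec, i.e. one JSON object per line, terminated by a newline. A minimal Java sketch (hypothetical, independent of logback) of what such an event looks like on the wire:

```java
public class JsonLinesEvent {
    // Build one newline-terminated JSON event, the shape json_lines expects.
    static String event(String appname, String level, String message) {
        return String.format(
            "{\"appname\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}\n",
            appname, level, message);
    }

    public static void main(String[] args) {
        // In the real setup, logback's TCP appender writes lines like this to port 5044.
        System.out.print(event("demo-app", "INFO", "user logged in"));
    }
}
```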

Spring Boot Application Integration

Add logstash-logback-encoder dependency and configure logback‑spring.xml to send logs to Logstash:
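The dependency declaration in pom.xml might look like this (the version number is an assumption; check Maven Central for a current release):

```xml
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>7.4</version>
</dependency>
```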

<!-- logback cannot read Spring properties directly; expose spring.application.name via springProperty -->
<springProperty scope="context" name="appname" source="spring.application.name"/>
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>192.168.8.111:5044</destination>
  <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <timestamp><timeZone>UTC</timeZone></timestamp>
      <pattern>
        <pattern>{ "index":"elk", "appname":"${appname}", "timestamp":"%d{yyyy-MM-dd HH:mm:ss.SSS}", "thread":"%thread", "level":"%level", "logger_name":"%logger", "message":"%msg", "stack_trace":"%exception" }</pattern>
      </pattern>
    </providers>
  </encoder>
</appender>
<root level="INFO">
  <appender-ref ref="LOGSTASH"/>
  <appender-ref ref="CONSOLE"/>
</root>

Run the Spring Boot service, call http://localhost:8080/index, and verify that the logs appear in Kibana.
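The article does not show the endpoint itself; here is a minimal sketch of a controller backing /index (class name and log message are hypothetical, assuming spring-boot-starter-web is on the classpath):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IndexController {
    private static final Logger log = LoggerFactory.getLogger(IndexController.class);

    @GetMapping("/index")
    public String index() {
        // Routed to Logstash by the LOGSTASH appender in logback-spring.xml
        log.info("index endpoint hit");
        return "ok";
    }
}
```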

Testing and Verification

Access Elasticsearch at http://192.168.8.111:9200/, Kibana at http://192.168.8.111:5601/, and optionally the Elasticsearch‑head UI at http://192.168.8.111:9100/ to confirm data ingestion and visualization.

Tags: Docker, operations, Elasticsearch, ELK, log analysis, Logstash, Kibana
Written by

Selected Java Interview Questions

A professional Java tech channel sharing common knowledge to help developers fill gaps. Follow us!
