Build a Real‑Time ELK Log Analysis Platform on Ubuntu: Step‑by‑Step Guide
This tutorial walks through setting up a unified real‑time ELK log analysis platform on Ubuntu, covering installation and configuration of Logstash, Elasticsearch, Kibana, integration with Spring Boot and Nginx logs, and managing services with Supervisor for reliable operation.
During troubleshooting, log queries are essential. In microservice architectures logs are scattered, making retrieval hard. A unified real‑time log analysis platform like ELK can greatly improve efficiency.
ELK Overview
ELK is an open‑source real‑time log analysis platform consisting of Elasticsearch, Logstash and Kibana.
Logstash
Logstash collects server logs, providing a real‑time pipeline. It can unify data from various sources and standardize it for the chosen destination.
Logstash processing includes three parts:
Input: collects data from many sources (File, Syslog, MySQL, message queues, etc.).
Filter: parses and transforms data, building structured fields.
Output: sends data to Elasticsearch or other destinations.
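These three stages map one-to-one onto a pipeline configuration file. A minimal sketch (the file path, Grok pattern and output host here are illustrative):

```
input {
  file { path => ["/var/log/app.log"] }   # collect: tail a log file
}
filter {
  grok {                                  # parse: build structured fields
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch { hosts => "localhost:9200" }   # ship to Elasticsearch
}
```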
Elasticsearch
Elasticsearch (ES) is a distributed RESTful search and analytics engine with features such as:
Search: supports structured, unstructured, geo, metric queries.
Analytics: aggregations for trends and patterns.
Speed: handles billions of records with millisecond response.
Scalability: runs on a laptop or on hundreds of servers with petabytes of data.
Resilience: designed for distributed environments.
Flexibility: supports numeric, text, geo, structured and unstructured data.
Kibana
Kibana visualizes massive data in a browser‑based UI, allowing quick creation and sharing of dynamic dashboards to monitor Elasticsearch data in real time. Installation is straightforward and requires no code.
ELK Implementation Scheme
When services are deployed on multiple machines, log collection is critical. The solution: a Logstash instance on each service host (the Shipper) forwards logs to a Redis queue; another Logstash instance (the Indexer) reads from Redis, parses the logs, and stores them in Elasticsearch; Kibana reads from Elasticsearch and displays them.
ELK Platform Setup
Prerequisites:
One Ubuntu machine (or VM). For this tutorial Elasticsearch cluster setup is omitted; Logstash, Elasticsearch and Kibana are installed on the same machine.
JDK 1.8 or higher (the Supervisor examples later in this tutorial use jdk1.8.0_221).
Download installation packages for Logstash, Elasticsearch and Kibana.
Install Logstash
<code>tar -xzvf logstash-7.3.0.tar.gz</code>
Start Logstash with a simple pipeline that reads from stdin and writes to stdout:
<code>cd logstash-7.3.0
bin/logstash -e 'input { stdin {} } output { stdout {} }'</code>
Install Elasticsearch
<code>tar -xzvf elasticsearch-7.3.0-linux-x86_64.tar.gz</code>
Start Elasticsearch:
<code>cd elasticsearch-7.3.0
bin/elasticsearch</code>
Common issues:
Insufficient memory – lower the heap settings (-Xms/-Xmx) in config/jvm.options to fit the available RAM.
Running as root – Elasticsearch refuses to start as root; run it under a non-root user.
Verify startup with curl http://localhost:9200 and check the JSON response.
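A healthy node typically answers with a short JSON document along these lines (node name and cluster name vary per installation):

```json
{
  "name" : "ubuntu",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.3.0" },
  "tagline" : "You Know, for Search"
}
```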
Install Kibana
<code>tar -xzvf kibana-7.3.0-linux-x86_64.tar.gz</code>
Edit config/kibana.yml to point to Elasticsearch and allow remote access:
<code>elasticsearch.hosts: ["http://ip:9200"]
server.host: "0.0.0.0"
elasticsearch.username: "es"
elasticsearch.password: "es"</code>
Start Kibana and access http://ip:5601 to confirm successful launch.
Using ELK with Spring Boot
Create a Spring Boot project and add a spring-logback.xml configuration that defines a ROLLING_FILE appender with a custom pattern.
Package and deploy the application on Ubuntu, then verify that the log file /log/sb-log.log is being written.
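One plausible shape for that appender, assuming the line layout that the Indexer's Grok pattern expects (the appender name, rollover policy and the application name sb-app are illustrative):

```xml
<appender name="ROLLING_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>/log/sb-log.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>/log/sb-log.%d{yyyy-MM-dd}.log</fileNamePattern>
    <maxHistory>7</maxHistory>
  </rollingPolicy>
  <encoder>
    <!-- timestamp [thread] level logger applicationName - message -->
    <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %level %logger{50} sb-app - %msg%n</pattern>
  </encoder>
</appender>
```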
Configure Shipper Logstash
Write a Logstash config that reads the Spring Boot log file and outputs to a Redis channel:
<code>input {
  file {
    path => ["/log/sb-log.log"]
  }
}
output {
  redis {
    host => "10.140.45.190"
    port => 6379
    db => 8
    data_type => "channel"
    key => "sb-logback"    # must match the key the Indexer reads from
  }
}</code>
Configure Indexer Logstash
Read from Redis (the host, db and key must match the Shipper's output), parse logs with Grok, and store them in Elasticsearch:
<code>input {
  redis {
    host => "192.168.142.131"
    port => 6379
    db => 8
    data_type => "channel"
    key => "sb-logback"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:time} \[%{NOTSPACE:threadName}\] %{LOGLEVEL:level} %{DATA:logger} %{NOTSPACE:applicationName} -(?:.*=%{NUMBER:timetaken}ms|)" }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => "localhost:9200"
    index => "logback"
  }
}</code>
The Grok pattern extracts the timestamp, thread name, log level, logger, application name and, when present, the request duration.
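Before wiring a pattern into the pipeline, it helps to sanity-check it against a sample line. The sketch below uses a rough Python-regex equivalent of the Grok expression above; the sample log line, the application name sb-app and the helper names are illustrative:

```python
import re

# Rough regex equivalent of the Grok pattern used by the Indexer.
LOG_RE = re.compile(
    r"(?P<time>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?)"  # TIMESTAMP_ISO8601
    r" \[(?P<threadName>\S+)\]"                                       # [NOTSPACE]
    r" (?P<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)"                  # LOGLEVEL
    r" (?P<logger>.*?)"                                               # DATA
    r" (?P<applicationName>\S+)"                                      # NOTSPACE
    r" -(?:.*=(?P<timetaken>\d+)ms)?"                                 # optional duration
)

sample = ("2019-08-01 10:20:30.123 [http-nio-8080-exec-1] INFO "
          "c.e.demo.UserController sb-app - query users cost=35ms")

m = LOG_RE.match(sample)
if m:
    print(m.group("level"), m.group("timetaken"))  # INFO 35
```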
Using ELK with Nginx
Collect Nginx access logs (by default at /var/log/nginx/access.log) and define a Grok pattern for them:
<code>%{IPV4:ip} - - \[%{HTTPDATE:time}\] "%{NOTSPACE:method} %{DATA:requestUrl} HTTP/%{NUMBER:httpVersion}" %{NUMBER:httpStatus} %{NUMBER:bytes} "%{DATA:referer}" "%{DATA:agent}"</code>
Update the Indexer Logstash configuration to handle both logback and nginx types, using conditional filters and outputs.
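One way to realize this is to tag events with a type in each input and branch on it, sketched under the assumption that the Nginx Shipper publishes to a separate Redis key (here nginx-log; hosts, keys and index names are illustrative):

```
input {
  redis {
    host => "192.168.142.131"
    port => 6379
    db => 8
    data_type => "channel"
    key => "sb-logback"
    type => "logback"     # tags every event from this input
  }
  redis {
    host => "192.168.142.131"
    port => 6379
    db => 8
    data_type => "channel"
    key => "nginx-log"
    type => "nginx"
  }
}
filter {
  if [type] == "logback" {
    grok { match => { "message" => "%{TIMESTAMP_ISO8601:time} ..." } }  # Spring Boot pattern from above
  } else if [type] == "nginx" {
    grok { match => { "message" => "%{IPV4:ip} - - ..." } }             # Nginx pattern from above
  }
}
output {
  if [type] == "logback" {
    elasticsearch { hosts => "localhost:9200" index => "logback" }
  } else if [type] == "nginx" {
    elasticsearch { hosts => "localhost:9200" index => "nginx" }
  }
}
```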
Running ELK as Daemons
Use Supervisor to manage Elasticsearch, Logstash and Kibana as background services. Example supervisord.conf entries:
<code>[program:elasticsearch]
environment=JAVA_HOME="/usr/java/jdk1.8.0_221/"
directory=/home/elk/elk/elasticsearch
user=elk
command=/home/elk/elk/elasticsearch/bin/elasticsearch
[program:logstash]
environment=JAVA_HOME="/usr/java/jdk1.8.0_221/",LS_HEAP_SIZE=5000m
directory=/home/elk/elk/logstash
user=elk
command=/home/elk/elk/logstash/bin/logstash -f /home/elk/elk/logstash/indexer-logstash.conf
[program:kibana]
directory=/home/elk/elk/kibana
user=elk
command=/home/elk/elk/kibana/bin/kibana</code>
Reload Supervisor with sudo supervisorctl reload to start all components automatically on boot.
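The restart behavior can also be made explicit. The options below are commonly added to each [program:x] section (the log path is illustrative):

```ini
autostart=true          ; launch when supervisord starts
autorestart=true        ; relaunch the process if it exits
startretries=3          ; give up after three failed starts
stdout_logfile=/var/log/supervisor/%(program_name)s.log
```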
Conclusion
This tutorial introduced ELK, demonstrated how to build a real‑time log analysis platform, and showed integration with Spring Boot and Nginx logs.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and hope to accompany you throughout your operations career as we grow together.