Common ELK Deployment Architectures and Solutions for Log Management
This article introduces the ELK stack’s core components, compares four typical deployment architectures—including Logstash‑only, Filebeat‑based, and Kafka‑enhanced setups—discusses their trade‑offs, and provides practical configurations and solutions for multiline log merging, timestamp handling, and module‑specific filtering.
ELK (Elasticsearch, Logstash, Kibana, Beats) is a popular centralized logging solution. The stack consists of Beats (e.g., Filebeat) for data collection, Logstash for processing, Elasticsearch for storage and search, and Kibana for visualization.
Common ELK Deployment Architectures
1. Logstash as Log Collector
Each application server runs a Logstash instance that collects logs, filters and formats them, then forwards the data to Elasticsearch for storage and Kibana for visualization. This approach is resource‑intensive because Logstash consumes significant CPU and memory on the application servers.
2. Filebeat as Log Collector
Filebeat replaces Logstash on the application side. It is lightweight, consumes far fewer resources, and is often paired with Logstash for further processing. This is currently the most widely used architecture.
3. Architecture with a Caching Queue
On top of the Filebeat‑Logstash setup, a message queue such as Kafka is introduced. Filebeat sends logs to Kafka; Logstash reads from Kafka, reducing load on Elasticsearch and providing back‑pressure handling for high‑volume log streams.
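A minimal sketch of this wiring, assuming a broker at kafka1:9092 and a topic named app-logs (both names are hypothetical placeholders):

```
# filebeat.yml — ship to Kafka instead of directly to Logstash
output.kafka:
  hosts: ["kafka1:9092"]
  topic: "app-logs"

# logstash.conf — consume from the same topic
input {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topics => ["app-logs"]
  }
}
```

Kafka then absorbs bursts: Filebeat writes at production speed while Logstash consumes at its own pace.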
4. Summary of the Three Architectures
Because of its resource consumption, the pure Logstash collector is rarely used today. The Filebeat‑Logstash architecture is the default choice, while the Kafka‑enhanced version is only needed for very large data volumes or specific reliability requirements.
Common Problems and Solutions
1. Multiline Log Merging
Logs that span multiple lines need to be merged into a single event. The solution is to use the multiline plugin in Filebeat or Logstash, depending on the deployment architecture.
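A typical case is a Java exception: the stack-trace lines carry no timestamp of their own, so without merging each line would be indexed as a separate event. A hypothetical example:

```
[2018-05-18 14:23:01,123] ERROR Request failed
java.lang.NullPointerException
    at com.example.Foo.bar(Foo.java:42)
    at com.example.Main.main(Main.java:10)
```

Only the first line starts with `[`, which is exactly what the patterns in the configurations below key on.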
Filebeat multiline configuration:
filebeat.prospectors:
-
  paths:
    - /home/project/elk/logs/test.log
  input_type: log
  multiline:
    pattern: '^\['
    negate: true
    match: after
output:
  logstash:
    hosts: ["localhost:5044"]

Key parameters:
pattern – regular expression to identify the start of a new log entry.
negate – true means lines that do NOT match the pattern are appended to the previous line.
match – "after" appends to the previous line’s end; "before" would prepend.
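The interplay of these three settings is easy to misread, so here is a toy model in Python of the exact combination used above (pattern '^\[', negate: true, match: after) — a sketch of the semantics, not of Filebeat's implementation:

```python
import re

def merge_multiline(lines, pattern=r'^\['):
    """Model of Filebeat's multiline settings with negate: true and
    match: after — any line that does NOT match `pattern` is appended
    to the end of the previous event."""
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)           # line starts a new event
        else:
            events[-1] += "\n" + line     # continuation line
    return events

lines = [
    "[2018-05-18 14:23:01,123] ERROR request failed",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "[2018-05-18 14:23:02,000] INFO recovered",
]
print(merge_multiline(lines))  # two events: the merged stack trace, then the INFO line
```

The four input lines collapse into two events: the error line with its two stack-trace lines attached, and the standalone INFO line.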
Logstash multiline configuration:
input {
  beats {
    port => 5044
  }
}
filter {
  multiline {
    pattern => "%{LOGLEVEL}\s*]"
    negate => true
    what => "previous"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}

2. Replacing Kibana’s @timestamp with Log‑Generated Time
By default Kibana shows the ingestion time. To use the timestamp embedded in the log message, combine the grok filter with the date filter in Logstash.
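As a quick sanity check on the date format, a raw timestamp matching the grok pattern `%{YEAR}%{MONTHNUM}%{MONTHDAY}\s+%{TIME}` — say the hypothetical line fragment `20180518 14:23:01,123` — parses with the Python equivalent of the Logstash format string `yyyyMMdd HH:mm:ss,SSS`:

```python
from datetime import datetime

# Hypothetical captured value from the grok pattern
raw = "20180518 14:23:01,123"

# Logstash's "yyyyMMdd HH:mm:ss,SSS" corresponds to this strptime format
ts = datetime.strptime(raw, "%Y%m%d %H:%M:%S,%f")
print(ts.isoformat())  # 2018-05-18T14:23:01.123000
```

If the parse fails in Logstash, the event keeps its ingestion-time @timestamp and gets a `_dateparsefailure` tag, which is the first thing to check when the override does not take effect.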
filter {
  grok {
    match => ["message", "(?<customer_time>%{YEAR}%{MONTHNUM}%{MONTHDAY}\s+%{TIME})"]
  }
  date {
    match => ["customer_time", "yyyyMMdd HH:mm:ss,SSS"]
    target => "@timestamp"
  }
}

3. Filtering Logs by System Module in Kibana
Add a custom field (e.g., log_from) in Filebeat to tag logs from different modules, or use document_type to route logs to separate Elasticsearch indices.
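Because Filebeat nests custom fields under a `fields.` prefix by default (unless fields_under_root is enabled), the tag defined in the configuration below can be queried in Kibana's search bar like this:

```
fields.log_from: "account"
```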
filebeat.prospectors:
-
  paths:
    - /home/project/elk/logs/account.log
  input_type: log
  multiline:
    pattern: '^\['
    negate: true
    match: after
  fields:
    log_from: account
-
  paths:
    - /home/project/elk/logs/customer.log
  input_type: log
  multiline:
    pattern: '^\['
    negate: true
    match: after
  fields:
    log_from: customer
output:
  logstash:
    hosts: ["localhost:5044"]

In Logstash output, the index can be set dynamically:
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{type}"
  }
}

Conclusion
The article presented three ELK deployment architectures for real‑time log analysis, highlighted their advantages and drawbacks, and offered concrete configuration examples for common challenges such as multiline merging, timestamp correction, and module‑level filtering. The Filebeat‑Logstash setup is the most popular, while adding a queue like Kafka is optional for high‑throughput scenarios.