
ELK Stack Deployment Architectures, Common Issues, and Solutions

This article introduces the ELK stack, compares three typical deployment architectures—Logstash as collector, Filebeat as collector, and a cache‑queue‑enhanced design—then details practical solutions for multiline log merging, Kibana timestamp handling, and module‑based log filtering, concluding with best‑practice recommendations.


The ELK stack (Elasticsearch, Logstash, and Kibana, commonly extended with Beats shippers) is a popular centralized logging solution that enables real-time collection, storage, and visualization of logs.

1. Common Deployment Architectures

1.1 Logstash as Log Collector

Each application server runs a Logstash instance to collect, filter, and format logs before sending them to Elasticsearch; Kibana visualizes the data. This approach consumes significant resources on the application servers.
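As a sketch of this architecture, a per-server Logstash pipeline could look like the following (the log path, host name, and index name are illustrative assumptions, not values from the article):

```conf
# logstash.conf on each application server (hypothetical paths/hosts)
input {
  file {
    path => "/var/log/app/*.log"      # application log files to tail
    start_position => "beginning"
  }
}
filter {
  # parsing and formatting (e.g. grok) would go here
}
output {
  elasticsearch {
    hosts => ["es-host:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```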

1.2 Filebeat as Log Collector

Filebeat, a lightweight data shipper, replaces Logstash on the application side. It is often paired with Logstash downstream and is the most widely used architecture due to its low resource footprint.
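A minimal sketch of this pairing, assuming Filebeat tails local files and ships to a downstream Logstash on the Beats port (paths and host are hypothetical):

```yaml
# filebeat.yml on each application server (illustrative values)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
output.logstash:
  hosts: ["logstash-host:5044"]
```

The downstream Logstash would then receive these events with a `beats { port => 5044 }` input block.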

1.3 Architecture with a Cache Queue

Building on the Filebeat approach, a Redis (or other message queue) buffer is introduced. Filebeat forwards logs to the queue, and Logstash reads from it, improving load balancing and data safety for high‑volume scenarios.
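With Redis as the buffer, Filebeat writes events to a Redis list and Logstash consumes from the same list. A sketch of both ends, assuming a hypothetical Redis host and list key:

```yaml
# filebeat.yml: ship to a Redis list instead of Logstash directly
output.redis:
  hosts: ["redis-host:6379"]
  key: "app-logs"          # Redis list acting as the queue
```

```conf
# logstash pipeline: consume from the same Redis list
input {
  redis {
    host      => "redis-host"
    data_type => "list"
    key       => "app-logs"
  }
}
```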

2. Problems and Solutions

2.1 Multiline Log Merging

When a single logical log entry spans multiple lines, use the multiline plugin in Filebeat or Logstash to merge them. Configuration differs by architecture:

```yaml
multiline:
  pattern: '\['
  negate: true
  match: after
```

In Filebeat, setting negate: true and match: after means that lines which do not match the pattern are appended to the end of the preceding line that did match, so each multiline entry is reassembled into a single event.
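In the Logstash-as-collector architecture, the equivalent merge can be done with the multiline codec on the input. The pattern below follows the same bracket-prefix convention; the file path is an assumption:

```conf
input {
  file {
    path => "/var/log/app/*.log"
    codec => multiline {
      pattern => "\["        # a new log entry starts with '['
      negate  => true
      what    => "previous"  # non-matching lines are merged into the previous entry
    }
  }
}
```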

2.2 Replacing Kibana’s @timestamp with Log Timestamp

Use Logstash’s grok filter together with the date plugin to extract the timestamp from the log message and overwrite the @timestamp field.

```conf
# Example grok pattern
CUSTOMER_TIME %{YEAR}%{MONTHNUM}%{MONTHDAY}\s+%{TIME}
```
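Wired together, the filter first extracts the timestamp with the custom pattern, then the date plugin parses it and overwrites @timestamp. The field name and the date format string are assumptions that must mirror the actual log layout:

```conf
filter {
  grok {
    # assumes the log line begins with the timestamp matched by CUSTOMER_TIME
    match => { "message" => "%{CUSTOMER_TIME:customer_time}" }
  }
  date {
    # format must correspond to the CUSTOMER_TIME pattern (assumed here)
    match  => ["customer_time", "yyyyMMdd HH:mm:ss"]
    target => "@timestamp"   # replace Kibana's default @timestamp
  }
}
```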

2.3 Filtering Logs by System Module in Kibana

Add a custom field (e.g., log_from) to identify the source module, or create separate Elasticsearch indices per module and configure Kibana index patterns accordingly.

Filebeat example (adding log_from field):

```yaml
# filebeat.yml snippet
fields:
  log_from: "moduleA"
```

Logstash output example (dynamic index based on document_type):

```conf
# logstash output
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{type}"
  }
}
```
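Alternatively, the log_from field itself can drive per-module indices via a conditional output. Note that custom Filebeat fields land under fields.log_from by default; the index names below are illustrative (Elasticsearch index names must be lowercase):

```conf
output {
  if [fields][log_from] == "moduleA" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "modulea-logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "other-logs-%{+YYYY.MM.dd}"
    }
  }
}
```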

3. Summary

The second architecture—Filebeat as the collector combined with optional Logstash processing—is currently the most popular due to its efficiency. The article also provides practical guidance on handling multiline logs, aligning timestamps, and isolating module‑specific logs, illustrating how ELK can serve both log analysis and broader monitoring needs.

Written by Architect's Guide

Dedicated to sharing programmer-architect skills—Java backend, system, microservice, and distributed architectures—to help you become a senior architect.
