
Introduction to Elastic Stack and Building an Automated Log Monitoring System

This guide explains how to combine Tencent Cloud Elasticsearch with the Elastic Stack—Filebeat, Logstash, and Kibana—to automatically collect JSON‑formatted logs from development workflows, route them to dynamically created indices, and visualize status dashboards, while highlighting best‑practice tips for schema design, deduplication, and future scaling.


Elastic Stack is a suite of software from Elastic, including Elasticsearch, Logstash, Kibana, and Beats. Elasticsearch is a distributed search engine for storing and querying data, Logstash is a dynamic data collection pipeline for cleaning and formatting logs, Kibana provides rich visualizations, and Beats (e.g., Filebeat) ship log files to Elasticsearch or Logstash.

The stack is widely used for log management, metric analysis, performance monitoring, and application search. This article demonstrates how to use Tencent Cloud Elasticsearch together with Elastic Stack to build an automated monitoring and statistics system for workflow processes.

Preparation

The log message protocol is standardized to JSON, eliminating the need for Logstash to perform format conversion. The protocol defines fields such as log_type (used as the Elasticsearch index name), phase, finish_time, and miles, which serve as the monitoring dimensions.
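As an illustration of the protocol, a log producer might emit one JSON object per line. This is a hypothetical sketch: the field names log_type, phase, finish_time, and miles come from the protocol above, but the values and the index name workflow_build are made up for the example.

```python
import json
import time

# Example record following the JSON log protocol; the values here are
# illustrative only.
record = {
    "log_type": "workflow_build",  # becomes the Elasticsearch index name
    "phase": "compile",
    "finish_time": int(time.time()),
    "miles": 3,
}

# One JSON object per line is the shape Filebeat picks up from *.log files.
line = json.dumps(record, ensure_ascii=False)
print(line)
```

Appending lines like this to a file in the watched log directory is enough for Filebeat to ship them downstream.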

Data Ingestion

Logs are generated from two sources: completed development (periodic database scans) and ongoing development (developers emit logs following the defined JSON protocol). Both types are sent by Filebeat to Logstash, which forwards them to Elasticsearch.

Logstash Configuration (conf)

input {
  beats {
    port => 8888
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["ES_HOST:ES_PORT"]  # placeholder: the host and port were redacted in the original
    index => "%{log_type}"
  }
  stdout {
    codec => rubydebug
  }
}

This configuration makes Logstash listen on port 8888, decode JSON messages, route each one to the Elasticsearch index named by its log_type field, and also print it to the console for debugging.
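One caveat with this routing: if a message arrives without a log_type field, the %{log_type} reference is not substituted and the document lands in an index literally named "%{log_type}". A guard filter can catch this; the fallback index name unrouted-logs below is an assumption, not part of the original setup.

```conf
filter {
  # Assign a fallback so documents missing log_type still land in a
  # predictable index rather than one literally named "%{log_type}".
  if ![log_type] {
    mutate { add_field => { "log_type" => "unrouted-logs" } }
  }
}
```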

Start Logstash with:

./bin/logstash -c logstash.conf

(optionally under nohup so it keeps running after the shell session ends).

Filebeat Configuration (yml)

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /usr/local/app/wsd_cron_agent/script/logs/*.log
output.logstash:
  hosts: ["LOGSTASH_HOST:LOGSTASH_PORT"]  # placeholders: redacted in the original; the Logstash beats input above listens on 8888

Filebeat tails the *.log files in the specified directory and ships new log lines to Logstash.

Start Filebeat with:

./filebeat -e -c filebeat.yml

(optionally under nohup so it keeps running after the shell session ends).

Once both agents are running, logs flow automatically into Elasticsearch.

Kibana Visualization

To visualize the data, first create an index pattern in Kibana (Management → Index Patterns → Create Index Pattern) using the index name defined by log_type . Then use the Visualize feature to build dashboards that display the status of each automated workflow.

Key Considerations & Improvements

- Elasticsearch is a NoSQL document store and cannot perform join queries, so each log document must carry all the information needed for analysis.

- Avoid generating duplicate log messages, since duplicates skew the statistics.

- Consider adding Grok processing in Logstash so developers can emit raw (non-JSON) logs and leave the formatting to the pipeline.

- Future enhancements may include alerting plugins and improved load balancing and performance between Filebeat and Logstash.
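On the deduplication point, one common approach (a sketch, not something the original setup describes) is to derive a deterministic Elasticsearch document _id from the log's identifying fields, so a re-sent log overwrites the existing document instead of creating a duplicate. The choice of key fields below is an assumption; use whatever uniquely identifies an event in your protocol.

```python
import hashlib
import json

def doc_id(record: dict, keys=("log_type", "phase", "finish_time")) -> str:
    """Derive a stable Elasticsearch _id from identifying fields.

    Re-indexing the same logical event with this _id updates the
    existing document rather than adding a duplicate.
    """
    material = json.dumps({k: record.get(k) for k in keys}, sort_keys=True)
    return hashlib.sha1(material.encode("utf-8")).hexdigest()

event = {"log_type": "workflow_build", "phase": "compile", "finish_time": 1700000000}
print(doc_id(event))  # identical input always yields the identical _id
```

The same idea can be applied in Logstash itself by setting the elasticsearch output's document_id from a fingerprint of the event.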

Tags: Elasticsearch, Log Monitoring, Elastic Stack, Logstash, Beats, Kibana
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
