
Comprehensive Guide to Installing and Configuring Filebeat 7.7.0 for Log Collection

This article provides a detailed tutorial on Filebeat 7.7.0, covering its purpose, architecture, installation via tarball, essential commands, configuration of inputs and outputs (including Logstash and Elasticsearch), keystore usage, module activation, and step‑by‑step verification of log collection in an ELK stack.


Filebeat 7.7.0 is a lightweight shipper used to forward and centralize log data. It belongs to the Beats family, which also includes Packetbeat, Metricbeat, Winlogbeat, Auditbeat, and Heartbeat, offering low‑resource alternatives to Logstash for various data sources.

What Filebeat does: It monitors specified log files, reads new lines as they are appended, and forwards events to Elasticsearch or Logstash. Its architecture consists of inputs (which discover files to monitor) and harvesters (which read file contents and send events to the libbeat core).

Installation (tarball method):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.0-linux-x86_64.tar.gz
tar -xzvf filebeat-7.7.0-linux-x86_64.tar.gz
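After downloading and unpacking, it is worth confirming the binary runs before touching any configuration (directory name follows from the tarball above):

```
cd filebeat-7.7.0-linux-x86_64
./filebeat version   # should report version 7.7.0
```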

After extraction, the main configuration file is filebeat.yml. Basic commands include:

export   # export the current config, index template, or dashboards
run      # start Filebeat (default)
test     # test configuration
keystore # manage secret store
modules  # manage modules
setup    # set up initial environment
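For example, the test subcommand can sanity-check the setup before the first run (both are standard Filebeat subcommands, run from the extracted directory):

```
./filebeat test config -c filebeat.yml   # validate filebeat.yml syntax
./filebeat test output                   # check that the configured output is reachable
```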

Key configuration concepts:

Inputs define which log files to read (e.g., type: log, paths: [/var/log/*.log]).

Input and harvester options control how files are read, such as close_inactive, scan_frequency, and tail_files.

Keystore stores sensitive values (e.g., Elasticsearch passwords) and can be referenced as ${ES_PWD} in the config.

Outputs support Elasticsearch, Logstash, Kafka, Redis, File, Console, etc.; the most common are Elasticsearch and Logstash.
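Putting these concepts together, a minimal filebeat.yml could look like the sketch below (paths and option values are illustrative; the ES_PWD secret is assumed to have been added with ./filebeat keystore add ES_PWD):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
    close_inactive: 5m    # close a harvester after 5 minutes without new lines
    scan_frequency: 10s   # how often the input checks for new files
    tail_files: false     # read files from the beginning, not the end

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "${ES_PWD}"   # resolved from the keystore at runtime
```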

Example: Logstash output:

output.logstash:
  hosts: ["192.168.110.130:5044", "192.168.110.131:5044", "192.168.110.132:5044", "192.168.110.133:5044"]
  loadbalance: true

Start Filebeat with ./filebeat -e , then configure Logstash to listen on port 5044 and forward to Elasticsearch.
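On the Logstash side, a minimal pipeline for this setup could look like the following sketch (hosts and index pattern are illustrative, following the conventional beats-input layout):

```
input {
  beats {
    port => 5044   # matches the port in the Filebeat output.logstash config
  }
}
output {
  elasticsearch {
    hosts => ["192.168.110.130:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```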

Example: Elasticsearch output:

output.elasticsearch:
  hosts: ["192.168.110.130:9200", "192.168.110.131:9200"]
  username: "elastic"
  password: "${ES_PWD}"

After launching, Filebeat creates an index named filebeat-%{[agent.version]}-%{+yyyy.MM.dd} in Elasticsearch (in 7.x, agent.version replaces the older beat.version field).

Modules: Enable the Elasticsearch module to parse slow-log queries. Steps include editing modules.d/elasticsearch.yml, running ./filebeat modules enable elasticsearch, initializing with ./filebeat setup -e, and finally starting Filebeat.
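After enabling the module, modules.d/elasticsearch.yml can be edited to point at the actual log locations; a sketch (the paths are illustrative, the fileset names follow the module's documented structure):

```yaml
- module: elasticsearch
  server:
    enabled: true
    var.paths: ["/var/log/elasticsearch/*.log"]
  slowlog:
    enabled: true
    var.paths: ["/var/log/elasticsearch/*_slowlog.log"]
```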

Verification can be done via Kibana dashboards (configured with setup.kibana.host ) to ensure logs are correctly ingested and visualized.
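For the dashboards to be available, filebeat.yml must point at Kibana before running setup; a minimal sketch (host is illustrative):

```yaml
setup.kibana:
  host: "192.168.110.130:5601"

setup.dashboards.enabled: true   # load the bundled Filebeat dashboards during setup
```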

Original article link: https://www.cnblogs.com/zsql/p/13137833.html

Tags: configuration, Linux, ELK, log collection, Elastic Stack, Filebeat
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
