Master Filebeat 7.7: What It Is, How It Works, and How to Deploy It
This article explains Filebeat's role as a lightweight log shipper, its relationship to the Beats family, internal architecture, installation steps, configuration of inputs, outputs, keystore usage, module activation, and practical examples for sending logs to Logstash or Elasticsearch.
What is Filebeat?
Filebeat is a lightweight shipper for forwarding and centralizing log data. It monitors specified log files, collects events, and forwards them to Elasticsearch or Logstash for indexing.
Filebeat and Beats
Filebeat belongs to the Beats family of lightweight data shippers, which also includes Packetbeat, Metricbeat, Winlogbeat, Auditbeat, and Heartbeat. Beats are designed to consume far fewer resources than Logstash.
Filebeat Architecture
Filebeat consists of two main components: inputs and harvesters. Inputs define the sources to watch, while harvesters read each file line‑by‑line, track the read offset, and send events to the libbeat output pipeline.
How Filebeat Works
When started, Filebeat launches one or more inputs that scan configured paths. For each discovered file a harvester is started; the harvester reads new lines, updates a registry file with the last offset, and forwards events to the configured output. The registry ensures that on restart Filebeat continues from the last known position.
Filebeat guarantees at‑least‑once delivery by persisting the delivery state in the registry. If an output is unavailable, events are retried until the output acknowledges receipt.
Installation (tarball)
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.0-linux-x86_64.tar.gz
tar -xzvf filebeat-7.7.0-linux-x86_64.tar.gz
Basic Commands
filebeat export     # export the configuration, index template, or dashboards
filebeat run        # start Filebeat (the default command)
filebeat test       # test the configuration or output connectivity
filebeat keystore   # manage the secrets keystore
filebeat modules    # enable, disable, and list modules
filebeat setup      # load the index template, dashboards, and pipelines
Configuration Overview
The main configuration file is filebeat.yml. Key sections include inputs, outputs, and optional modules.
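A minimal filebeat.yml sketch showing how these sections fit together (the paths and hosts here are placeholders, not values from a real deployment):

```yaml
filebeat.inputs:            # what to read
  - type: log
    paths:
      - /var/log/*.log

filebeat.config.modules:    # optional pre-built modules
  path: ${path.config}/modules.d/*.yml

output.elasticsearch:       # exactly one output may be enabled at a time
  hosts: ["localhost:9200"]
```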
Inputs (example for log files)
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
  exclude_lines: ['^DBG']            # drop lines matching these regexes
  include_lines: ['^ERR', '^WARN']   # include_lines is applied before exclude_lines
  harvester_buffer_size: 16384       # per-harvester read buffer, in bytes
  max_bytes: 10485760                # truncate events larger than 10 MiB
  exclude_files: ['\.gz$']           # skip compressed rotated files
  close_inactive: 5m                 # close the file handle after 5m without new data
  tail_files: true                   # start reading new files at the end
Keystore Usage
Sensitive values such as passwords can be stored in the Filebeat keystore and referenced with ${ES_PWD}.
# create keystore
filebeat keystore create
# add a key
filebeat keystore add ES_PWD
# list keys
filebeat keystore list
Outputs
Common outputs are Elasticsearch and Logstash.
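When shipping to Logstash, the Logstash side needs a matching beats input listening on the port Filebeat targets (5044 by convention). A minimal pipeline sketch, with a placeholder Elasticsearch host:

```conf
input {
  beats {
    port => 5044    # must match the port in Filebeat's output.logstash hosts
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```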
Logstash output example
output.logstash:
  hosts: ["192.168.110.130:5044", "192.168.110.131:5044"]
  loadbalance: true
Elasticsearch output example
output.elasticsearch:
  hosts: ["192.168.110.130:9200", "192.168.110.131:9200"]
  username: "elastic"
  password: "${ES_PWD}"
Modules
Modules provide pre‑built configurations. The Elasticsearch module can be enabled to parse ES slow‑log files.
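Enabled modules are configured through per-module files under modules.d/. A hedged sketch of modules.d/elasticsearch.yml — the var.paths value is a placeholder, and by default the module infers log locations from the installation layout:

```yaml
- module: elasticsearch
  server:
    enabled: true
  slowlog:
    enabled: true
    var.paths:
      - /var/log/elasticsearch/*_index_search_slowlog.log  # placeholder path
```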
# enable module
./filebeat modules enable elasticsearch
# list enabled modules
./filebeat modules list
# setup dashboards
./filebeat setup -e
Running Filebeat
./filebeat -e
After starting, Filebeat writes events to a daily Elasticsearch index named filebeat-%{[agent.version]}-%{+yyyy.MM.dd}, where parsed log events become searchable.
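The daily index name pattern expands one index per day, which is how time-based retention works. A tiny sketch (the version value here is hypothetical):

```python
from datetime import date

def daily_index(version, day):
    """Expand the filebeat-<version>-<yyyy.MM.dd> index name pattern."""
    return f"filebeat-{version}-{day.strftime('%Y.%m.%d')}"

print(daily_index("7.7.0", date(2020, 5, 13)))  # filebeat-7.7.0-2020.05.13
```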
