
Collect Nginx Access & Error Logs with Filebeat, Logstash, and Rsyslog

This guide walks through three practical methods for collecting Nginx access and error logs: sending them directly from Filebeat to Elasticsearch, routing them through a Filebeat, Logstash, and Elasticsearch pipeline, and using Rsyslog to forward them to Logstash. Each method comes with configuration snippets.


1. Directly send logs from Filebeat to Elasticsearch

Locate filebeat.yml in the Filebeat installation directory and configure the log file paths and the Elasticsearch output.

<code>filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/*.log

output.elasticsearch:
  hosts: ["172.28.65.24:9200"]
</code>

Start Filebeat:

<code>./filebeat -e -c filebeat.yml -d "publish"</code>

Verify the logs appear in Elasticsearch (e.g., via the elasticsearch‑head plugin) and are visualized in Kibana.
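At this stage each indexed event carries the raw log line in its `message` field. To get a feel for what structure such a line contains, here is a small standalone Python sketch (illustrative only: this is not Filebeat's own parsing, and the sample line is made up) that splits one entry in nginx's default combined format:

```python
import re

# nginx "combined" log format:
#   client - user [time] "request" status bytes "referer" "user-agent"
LOG_RE = re.compile(
    r'(?P<client>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

# A made-up sample line in the combined format
line = ('127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.68.0"')

fields = LOG_RE.match(line).groupdict()
print(fields)  # client, user, time, request, status, bytes
```

In an ELK setup this kind of parsing normally lives in Logstash (see the next section), not in ad-hoc scripts.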

2. Send logs from Filebeat to Logstash, then to Elasticsearch

Install Logstash and create filebeat-pipeline.conf:

<code>input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => ["172.28.65.24:9200"]
  }
  stdout { codec => rubydebug }
}
</code>
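The pipeline above forwards each log line unparsed. A common refinement (not part of the original setup, and assuming nginx's default combined access-log format) is a grok filter in the same config file to extract structured fields:

```
filter {
  grok {
    # COMBINEDAPACHELOG ships with Logstash and matches the
    # "combined" access-log format that nginx uses by default
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```

With this in place, Kibana can aggregate on fields such as the response status or client address instead of full-text matching the raw line.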

Run Logstash:

<code>bin/logstash -f filebeat-pipeline.conf --config.reload.automatic</code>

Modify filebeat.yml to disable the Elasticsearch output and enable the Logstash output with the correct host and port.

<code>#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["172.28.65.24:5044"]
</code>

Start Filebeat again and access the Nginx web service; the logs will flow through Logstash into Elasticsearch and be viewable in Kibana.

3. Forward Nginx logs via Rsyslog to Logstash, then to Elasticsearch

When direct Filebeat installation is not possible, configure Nginx to send logs via syslog:

<code>access_log syslog:server=172.28.65.32:514,facility=local7,tag=nginx_access_log,severity=info;
error_log syslog:server=172.28.65.32:514,facility=local7,tag=nginx_error_log info;
</code>

Set up Logstash to receive syslog input:

<code>input {
  syslog {
    type => "system-syslog"
    port => 514
  }
}
output {
  elasticsearch {
    hosts => ["172.28.65.24:9200"]
    index => "system-syslog-%{+YYYY.MM}"
  }
  stdout { codec => rubydebug }
}
</code>
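Note that binding the syslog input to port 514 requires running Logstash as root, since ports below 1024 are privileged. A common workaround (an assumption, not part of the original article) is to use an unprivileged port on both ends:

```
# Logstash side: listen above 1024 so root is not required
input {
  syslog {
    type => "system-syslog"
    port => 5514
  }
}
```

The senders must then point at the same port, e.g. `syslog:server=172.28.65.32:5514` in the nginx directives.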

Start Logstash and verify it listens on port 514. Alternatively, configure Rsyslog on the Nginx host to monitor the log files and forward them:

<code>$ModLoad imfile
$InputFilePollInterval 1
$WorkDirectory /var/spool/rsyslog
$InputFileName /usr/local/nginx/logs/access.log
$InputFileTag nginx-access:
$InputFileStateFile stat-nginx-access
$InputFileSeverity info
$InputRunFileMonitor
$InputFileName /usr/local/nginx/logs/error.log
$InputFileTag nginx-error:
$InputFileStateFile stat-nginx-error
$InputFileSeverity error
$InputRunFileMonitor
*.* @172.28.65.32:514
</code>
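The snippet above uses rsyslog's legacy directive syntax. On rsyslog v8+ the same setup can also be expressed in the newer RainerScript style (a sketch under that assumption; paths and addresses are taken from the example above):

```
module(load="imfile" PollingInterval="1")

input(type="imfile"
      File="/usr/local/nginx/logs/access.log"
      Tag="nginx-access:"
      Severity="info")

input(type="imfile"
      File="/usr/local/nginx/logs/error.log"
      Tag="nginx-error:"
      Severity="error")

# forward everything over UDP to Logstash
*.* action(type="omfwd" target="172.28.65.32" port="514" protocol="udp")
```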

Restart Rsyslog, access the Nginx service, and observe the logs appearing in Logstash’s console and Elasticsearch.

The article demonstrates flexible log‑collection strategies using Filebeat, Logstash, and Rsyslog to ingest Nginx access and error logs into an ELK stack.

Tags: Elasticsearch, DevOps, nginx, log collection, Logstash, Filebeat, rsyslog
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.
