
How to Monitor Nginx Logs with ELK: From Logstash Config to Kibana Dashboards

This guide walks through setting up an ELK stack to collect, parse, and visualize Nginx access logs, covering Logstash configuration, Grok patterns, Elasticsearch setup, Nginx proxy with basic authentication, and creating Kibana dashboards for log analysis.


Introduction

This article explains how to monitor Nginx logs, analyze them with Logstash, and display the results in Kibana, using HTTP basic authentication for access control.

Note: The environment assumes Elasticsearch, Logstash, and Kibana are already installed, along with a Java JDK.

Sample Nginx log line:

<code>218.75.177.193 - - [03/Sep/2016:03:34:06 +0800] "POST /newRelease/everyoneLearnAjax HTTP/1.1" 200 370 "http://www.xxxxx.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"</code>

Nginx log_format definition:

<code>log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';</code>
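To see how the fields declared in log_format line up with the sample line, here is a throwaway awk sketch (illustration only — in the pipeline below, grok does the real parsing):

```shell
# Split the sample line on '" ' (closing quote + space); the second field
# then begins with $status and $body_bytes_sent. Pure illustration.
line='218.75.177.193 - - [03/Sep/2016:03:34:06 +0800] "POST /newRelease/everyoneLearnAjax HTTP/1.1" 200 370 "http://www.xxxxx.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"'
status=$(printf '%s\n' "$line" | awk -F'" ' '{print $2}' | awk '{print $1}')
bytes=$(printf '%s\n' "$line" | awk -F'" ' '{print $2}' | awk '{print $2}')
echo "status=$status bytes=$bytes"   # status=200 bytes=370
```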

Configure Logstash

1. Create a new configuration file

/etc/logstash/conf.d/nginx_access.conf

with the following content:

<code># cat /etc/logstash/conf.d/nginx_access.conf
input {
  file {
    path => [ "/data/nginx-logs/access.log" ]
    start_position => "beginning"
    ignore_older => 0
  }
}
filter {
  grok {
    match => { "message" => "%{NGINXACCESS}" }
  }
  geoip {
    source => "http_x_forwarded_for"
    target => "geoip"
    database => "/etc/logstash/GeoLiteCity.dat"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
    convert => [ "response", "integer" ]
    convert => [ "bytes", "integer" ]
    replace => { "type" => "nginx_access" }
    remove_field => "message"
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  mutate {
    remove_field => "timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-nginx-access-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}</code>

Key sections explained:

input: reads the Nginx access log file from /data/nginx-logs/access.log.

filter: uses grok with the NGINXACCESS pattern, enriches records with geoip, converts fields to proper types, and formats timestamps.

output: sends structured events to Elasticsearch and prints them to the console.
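What the date filter accomplishes can be approximated with GNU date (assumed available; Logstash itself does this conversion internally): it parses the access log's local-time timestamp and stores @timestamp in UTC.

```shell
# Convert the sample timestamp (dd/MMM/yyyy:HH:mm:ss Z) to the UTC ISO form
# Elasticsearch stores in @timestamp. Requires GNU date.
ts='03/Sep/2016:03:34:06 +0800'
iso=$(date -u -d "$(printf '%s' "$ts" | sed 's#/# #g; s#:# #')" +%Y-%m-%dT%H:%M:%SZ)
echo "$iso"   # 2016-09-02T19:34:06Z
```

This is why +0800 log lines appear eight hours "earlier" in raw Elasticsearch documents; Kibana shifts them back to browser-local time for display.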

Grok Pattern

<code># mkdir -pv /opt/logstash/patterns
# cat /opt/logstash/patterns/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} "%{IPV4:http_x_forwarded_for}"</code>

Configure Elasticsearch

<code># egrep -v '^#|^$' /etc/elasticsearch/elasticsearch.yml
node.name: es-1
path.data: /data/elasticsearch/
network.host: 127.0.0.1
http.port: 9200</code>

Create the data directory and set permissions:

<code># mkdir -pv /data/elasticsearch
# chown -R elasticsearch.elasticsearch /data/elasticsearch</code>

Restart services and verify they are listening:

<code># systemctl restart elasticsearch
# systemctl restart logstash
# netstat -ulntp | grep java</code>

Install Nginx and Proxy Kibana

<code># wget https://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.10.0-1.el7.ngx.x86_64.rpm
# yum localinstall nginx-1.10.0-1.el7.ngx.x86_64.rpm -y
# cat /etc/nginx/conf.d/elk.conf
upstream elk {
    ip_hash;
    server 172.17.0.1:5601 max_fails=3 fail_timeout=30s;
}
server {
    listen 80;
    server_name localhost;
    server_tokens off;
    client_body_timeout 5s;
    client_header_timeout 5s;
    location / {
        proxy_pass http://elk/;
        auth_basic "ELK Private,Don't try GJ!";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}</code>

Create HTTP basic‑auth user:

<code># yum install httpd-tools -y
# htpasswd -cm /etc/nginx/.htpasswd elk</code>
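For scripted access (health checks, API calls through the proxy), the auth_basic credentials travel as a standard Basic auth header: base64 of "user:password". The password below is a placeholder.

```shell
# Build the Authorization header value by hand; "changeme" is a placeholder.
token=$(printf '%s' 'elk:changeme' | base64)
echo "Authorization: Basic $token"
```

With curl, `curl -u elk:changeme http://localhost/` constructs the same header automatically.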

Start Nginx and allow its listening port (80, as configured above) through the firewall:

<code># systemctl start nginx
# iptables -I INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT</code>

Result

After logging into Kibana with the created user, add the index pattern logstash-nginx-access-*, set it as the default, and explore dashboards that show client IP locations, total request counts, top URLs, error trends, and more.
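The wildcard in the index pattern matches the daily indices created by the Logstash output; one day's index name resolves like this (using the current date to stand in for an event's @timestamp):

```shell
# Logstash expands %{+YYYY.MM.dd} from each event's @timestamp; with
# today's date the index name becomes:
index="logstash-nginx-access-$(date +%Y.%m.%d)"
echo "$index"
```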

Tags: Elasticsearch, Nginx, ELK, Log Analysis, Log Monitoring, Logstash, Kibana
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
