
How to Monitor Nginx Logs with ELK: From Logstash Setup to Kibana Dashboard

This step‑by‑step guide shows how to collect, parse, and visualize Nginx access logs using the ELK stack, configure Logstash pipelines, set up Elasticsearch indices, proxy Kibana through Nginx, and secure access with HTTP basic authentication.

Efficient Ops

1. Introduction

This article explains how to monitor Nginx logs, analyze them with Logstash, and display visual charts in Kibana, while using Nginx as a reverse proxy with HTTP basic authentication for access control.

Note: The environment assumes Elasticsearch, Logstash, and Kibana are already installed, along with a Java JDK.

Example Nginx log line:

218.75.177.193 - - [03/Sep/2016:03:34:06 +0800] "POST /newRelease/everyoneLearnAjax HTTP/1.1" 200 370 "http://www.xxxxx.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" "123.45.67.89"

Nginx log_format definition:

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

2. Configure Logstash

2.1 Create configuration file

[root@log-monitor ~]# cat /etc/logstash/conf.d/nginx_access.conf
input {
    file {
        path => [ "/data/nginx-logs/access.log" ]
        start_position => "beginning"
        ignore_older => 0
    }
}

filter {
    grok {
        match => { "message" => "%{NGINXACCESS}" }
    }
    geoip {
        source => "http_x_forwarded_for"
        target => "geoip"
        database => "/etc/logstash/GeoLiteCity.dat"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
        convert => [ "[geoip][coordinates]", "float" ]
        convert => [ "response", "integer" ]
        convert => [ "bytes", "integer" ]
        replace => { "type" => "nginx_access" }
        remove_field => "message"
    }
    date {
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    mutate { remove_field => "timestamp" }
}
output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "logstash-nginx-access-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
}

2.2 Explanation of sections

Input

path: path to the Nginx access log file.
start_position: read from the beginning of the file.
ignore_older: set to 0 so no lines are skipped for being too old.

Filter

grok: parses each log line with the NGINXACCESS pattern (defined in section 3).
geoip: adds geographic information based on the http_x_forwarded_for address.
mutate: converts fields to their proper types, sets the event type to nginx_access, and removes the raw message field.
date: converts the timestamp string into the event's date field.

Output

elasticsearch: sends processed events to Elasticsearch, creating a daily index named logstash-nginx-access-YYYY.MM.dd.
stdout: prints events to the console for debugging.

3. Create Grok Pattern

Create a directory for custom patterns and add the Nginx pattern file:

[root@log-monitor ~]# mkdir -pv /opt/logstash/patterns
[root@log-monitor ~]# cat /opt/logstash/patterns/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} "%{IPV4:http_x_forwarded_for}"

Note: The http_x_forwarded_for field captures the real client IP when a CDN proxy is used.
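As a sketch of what the NGINXACCESS pattern extracts, here is a rough Python-regex translation applied to the sample log line from the introduction. This is for understanding the fields only, not a replacement for grok:

```python
import re

# Approximate Python equivalent of the NGINXACCESS grok pattern.
NGINXACCESS = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>[\d.]+))?" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" "(?P<http_x_forwarded_for>[\d.]+)"'
)

line = ('218.75.177.193 - - [03/Sep/2016:03:34:06 +0800] '
        '"POST /newRelease/everyoneLearnAjax HTTP/1.1" 200 370 '
        '"http://www.xxxxx.com/" '
        '"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" "123.45.67.89"')

m = NGINXACCESS.match(line)
print(m.group("clientip"))              # 218.75.177.193
print(m.group("response"))              # 200
print(m.group("http_x_forwarded_for"))  # 123.45.67.89
```

Note how clientip and http_x_forwarded_for differ: the former is the connecting proxy, the latter the real client behind the CDN.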

4. Prepare GeoIP Database

[root@log-monitor ~]# wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
[root@log-monitor ~]# gzip -d GeoLiteCity.dat.gz
[root@log-monitor ~]# mv GeoLiteCity.dat /etc/logstash/.

5. Test Logstash Configuration

[root@log-monitor ~]# /opt/logstash/bin/logstash -t -f /etc/logstash/conf.d/nginx_access.conf
Configuration OK
Note: keep the -t and -f options in this order.

6. Configure Elasticsearch

Update /etc/elasticsearch/elasticsearch.yml (uncomment and adjust these lines):

node.name: es-1
path.data: /data/elasticsearch/
network.host: 127.0.0.1
http.port: 9200

Create the data directory and set permissions:

[root@log-monitor ~]# mkdir -pv /data/elasticsearch
[root@log-monitor ~]# chown -R elasticsearch.elasticsearch /data/elasticsearch/

Restart services and verify they are listening:

[root@log-monitor ~]# systemctl restart elasticsearch
[root@log-monitor ~]# systemctl restart logstash
[root@log-monitor ~]# netstat -ulntp | grep java

7. Install Nginx and Proxy Kibana

Install Nginx:

[root@log-monitor ~]# wget https://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.10.0-1.el7.ngx.x86_64.rpm
[root@log-monitor ~]# yum localinstall nginx-1.10.0-1.el7.ngx.x86_64.rpm -y

Create /etc/nginx/conf.d/elk.conf to forward requests to Kibana and enable basic auth:

upstream elk {
    ip_hash;
    server 172.17.0.1:5601 max_fails=3 fail_timeout=30s;
    server 172.17.0.1:5601 max_fails=3 fail_timeout=30s;
}

server {
    listen 8888;
    server_name localhost;
    server_tokens off;
    client_body_timeout 5s;
    client_header_timeout 5s;
    location / {
        proxy_pass http://elk/;
        auth_basic "ELK Private,Don't try GJ!";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
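The ip_hash directive pins each client to a single upstream server. A minimal sketch of the idea follows; nginx keys IPv4 clients on the first three octets of the address, though its actual hash function differs, and the second backend address here is invented for illustration:

```python
import zlib

def pick_upstream(client_ip: str, servers: list[str]) -> str:
    """Illustrative sketch of nginx's ip_hash: clients are keyed on the
    first three IPv4 octets, so a whole /24 lands on one backend.
    (Not nginx's exact hash function.)"""
    key = ".".join(client_ip.split(".")[:3])
    return servers[zlib.crc32(key.encode()) % len(servers)]

servers = ["172.17.0.1:5601", "172.17.0.2:5601"]  # second address hypothetical
a = pick_upstream("218.75.177.193", servers)
b = pick_upstream("218.75.177.200", servers)
print(a == b)  # True: same /24, same backend
```

Sticky routing like this keeps a browser session on one Kibana instance, which matters if you later scale the upstream block to genuinely distinct backends.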

Install httpd-tools and create a user for basic auth:

[root@log-monitor ~]# yum install httpd-tools -y
[root@log-monitor ~]# htpasswd -cm /etc/nginx/.htpasswd elk
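With basic auth in place, every request to Kibana must carry an Authorization header. A quick sketch of how a client builds it per RFC 7617 (the password here is a placeholder, not the one you set with htpasswd):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # HTTP basic auth: "Basic " + base64("user:password").
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("elk", "secret"))  # Basic ZWxrOnNlY3JldA==
```

Browsers construct this header for you from the login prompt; the sketch is useful when scripting access to the proxied Kibana, e.g. with curl -u elk:secret.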

Start Nginx and open port 8888 for external access:

[root@log-monitor ~]# systemctl start nginx
[root@log-monitor ~]# iptables -I INPUT -p tcp -m state --state NEW --dport 8888 -j ACCEPT

After logging in with the elk user, Kibana becomes reachable at http://your_host:8888. Add the index pattern logstash-nginx-access-* in Kibana, set it as the default, and explore the imported data in the Discover view.
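The trailing wildcard is needed because the %{+YYYY.MM.dd} sprintf in the Logstash output creates one index per day (named from the event's UTC timestamp). A sketch of the naming scheme:

```python
from datetime import date
import fnmatch

def index_name(day: date) -> str:
    # Mirrors the Logstash output: logstash-nginx-access-%{+YYYY.MM.dd}
    return day.strftime("logstash-nginx-access-%Y.%m.%d")

name = index_name(date(2016, 9, 3))
print(name)  # logstash-nginx-access-2016.09.03
# The Kibana index pattern matches every daily index:
print(fnmatch.fnmatch(name, "logstash-nginx-access-*"))  # True
```

Daily indices also make retention simple: old days can be dropped by deleting whole indices instead of individual documents.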

8. Summary

Advantages of the ELK stack for operations:

Facilitates forensic analysis during network attacks.

Centralizes log collection and storage for later analysis.

Provides data‑driven insights for system and business optimization.

Tags: Monitoring, Nginx, ELK, log analysis, Logstash, Kibana
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
