How to Combine ELK and Zabbix for Real‑Time Log Alerting
This guide explains how to integrate ELK's Logstash with Zabbix using the logstash‑output‑zabbix plugin, covering installation, configuration of Logstash pipelines, Zabbix template and trigger setup, and testing the end‑to‑end alerting workflow.
1. What is the relationship between ELK and Zabbix?
ELK (Elasticsearch, Logstash, Kibana) is a log‑collection suite that can gather system, website, and application logs, filter and cleanse them, and store them centrally for real‑time search and analysis.
When you need to extract abnormal log entries (warnings, errors, failures) and notify operators immediately, Zabbix can be used. Logstash reads logs, filters for keywords such as error, failed, and warning, and forwards matching events to Zabbix via the logstash-output-zabbix plugin, which then triggers alerts.
2. Using Logstash with the Zabbix plugin
Logstash supports many output plugins; the logstash-output-zabbix plugin integrates Logstash with Zabbix. Install it with:
<code>[root@elk-master bin]# /usr/share/logstash/bin/logstash-plugin install logstash-output-zabbix</code>
Common plugin commands:
List installed plugins:
/usr/share/logstash/bin/logstash-plugin list
List with details:
/usr/share/logstash/bin/logstash-plugin list --verbose
List plugins matching a pattern:
/usr/share/logstash/bin/logstash-plugin list "*namefragment*"
List plugins of a specific group (e.g., output):
/usr/share/logstash/bin/logstash-plugin list --group output
Install a plugin (e.g., Kafka):
/usr/share/logstash/bin/logstash-plugin install logstash-output-kafka
Update all plugins:
/usr/share/logstash/bin/logstash-plugin update
Update a specific plugin:
/usr/share/logstash/bin/logstash-plugin update logstash-output-kafka
Remove a plugin:
/usr/share/logstash/bin/logstash-plugin remove logstash-output-kafka
3. Example of using logstash-output-zabbix
After installing the plugin, add the following snippet to a Logstash configuration file:
<code>zabbix {
zabbix_host => "[@metadata][zabbix_host]"
zabbix_key => "[@metadata][zabbix_key]"
zabbix_server_host => "x.x.x.x"
zabbix_server_port => "xxxx"
zabbix_value => "xxxx"
}</code>
Key fields:
zabbix_host : event field that holds the Zabbix host name (required).
zabbix_key : event field that holds the item key in Zabbix (required).
zabbix_server_host : IP or hostname of the Zabbix server (default localhost).
zabbix_server_port : trapper port of the Zabbix server (default 10051).
zabbix_value : field whose value is sent to the Zabbix item (default message).
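Under the hood, what the plugin sends to the Zabbix trapper port is essentially the same JSON "sender data" request that zabbix_sender uses, with the resolved host, key, and value filled in. A minimal Python sketch of that payload (the host/key/value here are illustrative, taken from the examples in this guide):

```python
import json

def build_sender_payload(host, key, value):
    """Build the JSON body of a Zabbix 'sender data' request,
    the format used by zabbix_sender and logstash-output-zabbix."""
    return json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": value}],
    })

payload = build_sender_payload("Zabbix server", "oslogs",
                               "Failed password for root")
print(payload)
```

The zabbix_host and zabbix_key settings above decide which event fields populate the "host" and "key" entries of this request.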
4. Integrating Logstash with Zabbix
Typical workflow: Logstash reads log files, filters for error keywords, and sends matching events to Zabbix, which then generates alerts.
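This workflow can be sanity-checked outside Logstash. The sketch below is a rough Python equivalent of the grok parsing and keyword filtering used in the pipeline configuration in section 4.1 (the sample log line is hypothetical):

```python
import re

# Rough equivalent of the pipeline's grok pattern:
# timestamp, hostname, program[pid]: content
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<hostname>\S+) "
    r"(?P<program>[\w\-./]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<content>.*)$"
)
# Same keyword list as the pipeline's output conditional
ALERT_RE = re.compile(r"(ERR|error|ERROR|Failed)")

def should_forward(line):
    """Return the message content if the line parses and contains
    an alert keyword; otherwise return None (event is not forwarded)."""
    m = SYSLOG_RE.match(line)
    if m and ALERT_RE.search(m.group("content")):
        return m.group("content")
    return None

line = "Apr  3 10:15:02 elk-node sshd[2765]: Failed password for root from 10.0.0.5"
print(should_forward(line))
```

Only events whose parsed content matches the keyword list would reach Zabbix; everything else still goes to Elasticsearch for search and analysis.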
4.1 Logstash pipeline configuration
Example file_to_zabbix.conf:
<code>input {
file {
path => "/var/log/secure"
type => "system"
start_position => "beginning"
}
}
filter {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:message_timestamp} %{SYSLOGHOST:hostname} %{DATA:message_program}(?:\[%{POSINT:message_pid}\])?: %{GREEDYDATA:message_content}" }
}
mutate {
add_field => ["[zabbix_key]","oslogs"]
add_field => ["[zabbix_host]","Zabbix server"]
remove_field => ["@version","message"]
}
date {
match => ["message_timestamp","MMM d HH:mm:ss","MMM dd HH:mm:ss","ISO8601"]
}
}
output {
elasticsearch {
index => "oslogs-%{+YYYY.MM.dd}"
hosts => ["192.168.73.133:9200"]
user => "elastic"
password => "Goldwind@2019"
sniffing => false
}
if [message_content] =~ /(ERR|error|ERROR|Failed)/ {
zabbix {
zabbix_host => "[zabbix_host]"
zabbix_key => "[zabbix_key]"
zabbix_server_host => "192.168.73.133"
zabbix_server_port => "10051"
zabbix_value => "message_content"
}
}
#stdout { codec => rubydebug }
}</code>Start Logstash with:
<code>[root@logstashserver ~]# cd /usr/local/logstash
[root@logstashserver logstash]# nohup bin/logstash -f config/file_to_zabbix.conf --path.data /tmp/ &</code>
4.2 Zabbix side configuration
Create a template named logstash-output-zabbix in Zabbix (Configuration → Templates → Create Template).
Create an application group under the template.
Create an item that receives the log content.
Link the template to the monitored host (e.g., 192.168.73.135) via Configuration → Hosts → select host → Templates → Add.
Generate a trigger that fires when the received data length is greater than 0.
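For reference, the item and trigger might look like the following (classic Zabbix trigger syntax, as used in Zabbix 3.x/4.x; the template name is the one created above — adjust key, template, and syntax to your Zabbix version):

```
Item:
  Name:                oslogs
  Type:                Zabbix trapper
  Key:                 oslogs
  Type of information: Log (or Text)

Trigger expression (fires whenever non-empty data arrives):
  {logstash-output-zabbix:oslogs.strlen()}>0
```

The item must be a trapper type, because the data is pushed by Logstash rather than polled by the Zabbix agent.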
Test the setup by causing a failed login on the monitored host; Logstash matches the Failed keyword and sends the log to Zabbix, which then triggers an alert (e.g., via DingTalk).
In Kibana you can also view the original logs.
Summary
The architecture remains: Filebeat collects logs, and Logstash processes and forwards them both to Elasticsearch/Kibana and to Zabbix via the logstash-output-zabbix plugin. Ensure Filebeat's source IP matches the host IP configured in Zabbix; otherwise logs won't be received. A quick test with zabbix_sender can verify the Zabbix key configuration.
<code># Test sending a value to Zabbix from the server
[root@localhost zabbix_sender]# /usr/local/zabbix/bin/zabbix_sender -s 192.168.73.135 -z 192.168.73.133 -k "oslogs" -o 1
info from server: "processed: 1; failed: 0; total: 1; seconds spent: 0.000081"
sent: 1; skipped: 0; total: 1</code>
Parameters:
-s specifies the host name as registered in Zabbix, -z the Zabbix server address, and -k the item key.
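On the wire, zabbix_sender frames this request with the Zabbix trapper protocol: a "ZBXD" signature, a flags byte of 0x01, an 8-byte little-endian body length, then the JSON body. A minimal Python sketch of the framing (no network I/O; host/key values reuse the ones from this guide):

```python
import json
import struct

def frame_request(body: dict) -> bytes:
    """Frame a request for the Zabbix trapper protocol:
    b'ZBXD' + flags 0x01 + 8-byte little-endian length + JSON body."""
    data = json.dumps(body).encode("utf-8")
    return b"ZBXD\x01" + struct.pack("<Q", len(data)) + data

packet = frame_request({
    "request": "sender data",
    "data": [{"host": "192.168.73.135", "key": "oslogs", "value": "1"}],
})
print(packet[:5], len(packet))
```

Sending these bytes over TCP to port 10051 is equivalent to the zabbix_sender call shown above, which is useful when debugging why the server reports "failed" items.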
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.