How to Build a Centralized Rsyslog Server with ELK for Network Log Management
This guide walks through deploying a centralized Rsyslog server on CentOS: configuring SELinux, the firewall, and rsyslog, then using Filebeat, Logstash, Elasticsearch, and Kibana to collect, process, and visualize logs from network devices, addressing common sysadmin pain points.
Introduction
As the number of servers and network devices in a data center grows, log management and querying become painful for system administrators.
Common problems faced by sysadmins
Cannot log into every server/device to view logs during routine maintenance.
Network devices have limited storage, cannot keep long‑term logs, yet issues may stem from old events.
Attackers often delete local logs to hide intrusion traces.
Monitoring tools like Zabbix cannot replace log management for events such as logins or cron jobs.
Therefore, deploying a centralized Rsyslog server is essential in the current network environment.
Advantages of Rsyslog
Most network devices support remote syslog; configuration usually only requires IP and port (default 514).
Linux servers need only a single line in /etc/rsyslog.conf to forward logs; deployment is simple.
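For instance, forwarding everything from a client host to the central server takes one rule (the server IP below matches the one used later in this guide; a single @ forwards over UDP, @@ over TCP):

```
# /etc/rsyslog.conf on a client host
*.* @192.168.99.50:514
```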
Deployment Architecture
Rsyslog Configuration
<code>System environment and software versions: CentOS Linux release 7.5.1804 (Core)
Elasticsearch-6.8.4
Kibana-6.8.4
Logstash-6.8.4
Filebeat-6.8.4
Rsyslog-8.24.0</code>
Disable SELinux
<code># setenforce 0
# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config</code>
Firewall configuration
<code>firewall-cmd --add-service=syslog --permanent
firewall-cmd --reload</code>
Check Rsyslog installation
CentOS 7 installs rsyslog by default.
<code>[root@ZABBIX-Server ~]# rpm -qa |grep rsyslog
rsyslog-8.24.0-16.el7.x86_64</code>
Edit /etc/rsyslog.conf
<code>$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
$WorkDirectory /var/lib/rsyslog
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$IncludeConfig /etc/rsyslog.d/*.conf
$OmitLocalLogging on
$IMJournalStateFile imjournal.state
*.info;mail.none;authpriv.none;cron.none;local6.none;local5.none;local4.none /var/log/messages
$template h3c,"/mnt/h3c/%FROMHOST-IP%.log"
local6.* ?h3c
$template huawei,"/mnt/huawei/%FROMHOST-IP%.log"
local5.* ?huawei
$template cisco,"/mnt/cisco/%FROMHOST-IP%.log"
local4.* ?cisco</code>
<code>$ModLoad imtcp # imtcp module, supports TCP
$ModLoad imudp # imudp module, supports UDP
$InputTCPServerRun 514
$UDPServerRun 514 # listen on TCP/UDP port 514</code>
Note:
<code>*.info;mail.none;authpriv.none;cron.none;local6.none;local5.none;local4.none /var/log/messages</code>
By default, local6.none, local5.none, and local4.none are not present, so network-device logs would also be written to /var/log/messages; adding them keeps the device logs out of the system log.
Check Rsyslog service
Restart Rsyslog
<code>systemctl restart rsyslog.service</code>
Log storage directories
Network devices forward logs to the syslog server; different vendors map to different local facilities:
<code>/mnt/huawei --- local6
/mnt/h3c --- local5
/mnt/cisco --- local4</code>Network device configuration
<code>Huawei:
info-center loghost source Vlanif99
info-center loghost 192.168.99.50 facility local5
H3C:
info-center loghost source Vlan-interface99
info-center loghost 192.168.99.50 facility local6
CISCO:
(config)#logging on
(config)#logging 192.168.99.50
(config)#logging facility local4
(config)#logging source-interface e0
Ruijie:
logging buffered warnings
logging source interface VLAN 99
logging facility local6
logging server 192.168.99.50</code>Note: 192.168.99.50 is the Rsyslog server IP.
Edit Filebeat configuration
Collect logs under /mnt/* and ship them to Logstash.
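The include_lines option keeps only lines matching at least one of the listed regular expressions; everything else is dropped before shipping. A small Python sketch with invented sample lines illustrates the effect, using the same patterns as the Filebeat input:

```python
import re

# The include_lines patterns from the Filebeat input, applied by hand.
patterns = [re.compile(p) for p in
            ['Failed', 'failed', 'error', 'ERROR',
             r'\bDOWN\b', r'\bdown\b', r'\bUP\b', r'\bup\b']]

# Invented sample lines.
lines = [
    "Interface GigabitEthernet0/0/1 changed to down",  # kept: matches \bdown\b
    "User admin logged in from 10.0.0.8",              # dropped: no pattern matches
    "Authentication failed for user test",             # kept: matches 'failed'
]
kept = [line for line in lines if any(p.search(line) for p in patterns)]
print(kept)  # the first and third lines survive
```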
<code>filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /mnt/huawei/*
  tags: ["huawei"]
  include_lines: ['Failed','failed','error','ERROR','\bDOWN\b','\bdown\b','\bUP\b','\bup\b']
  processors:
    - drop_fields:
        fields: ["beat","input_type","source","offset","prospector"]
- type: log
  paths:
    - /mnt/h3c/*
  tags: ["h3c"]
  include_lines: ['Failed','failed','error','ERROR','\bDOWN\b','\bdown\b','\bUP\b','\bup\b']
  processors:
    - drop_fields:
        fields: ["beat","input_type","source","offset","prospector"]
setup.template.settings:
  index.number_of_shards: 3
output.logstash:
  hosts: ["192.168.99.185:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~</code>
Edit Filebeat configuration: drop_fields is a processor, so it sits under a per-input processors list.
Edit Logstash configuration
Parse logs from Filebeat based on tags and output to Elasticsearch.
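Grok patterns are essentially named regular expressions: %{SYSLOGTIMESTAMP} matches a "MMM dd HH:MM:SS" timestamp, %{DATA} is a non-greedy capture, and %{GREEDYDATA} takes the rest of the line. A small Python sketch approximates the Huawei pattern to show which fields come out (the sample log line is invented):

```python
import re

# Rough Python equivalents of the grok captures used in the filter.
SYSLOGTIMESTAMP = r"[A-Z][a-z]{2}\s+\d{1,2} \d{2}:\d{2}:\d{2}"
huawei = re.compile(rf"(?P<time>{SYSLOGTIMESTAMP}) (?P<hostname>.*?) (?P<info>.*)")

# Invented sample resembling a Huawei syslog line.
sample = ("Nov  4 10:21:33 SW-CORE-01 "
          "%%01IFNET/4/LINK_STATE(l): GigabitEthernet0/0/1 changed to up")
m = huawei.match(sample)
print(m.group("hostname"))  # SW-CORE-01
print(m.group("info"))      # %%01IFNET/4/LINK_STATE(l): GigabitEthernet0/0/1 changed to up
```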
<code>input {
  beats {
    port => 5044
  }
}
filter {
  if "huawei" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:time} %{DATA:hostname} %{GREEDYDATA:info}" }
    }
  }
  else if "h3c" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:time} %{YEAR:year} %{DATA:hostname} %{GREEDYDATA:info}" }
    }
  }
  mutate {
    remove_field => ["message","time","year","offset","tags","path","host","@version","[log]","[prospector]","[beat]","[input][type]","[source]"]
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "networklogs-%{+YYYY.MM.dd}"
    hosts => ["192.168.99.185:9200"]
    sniffing => false
  }
}</code>
Visualization in Kibana
Create an index pattern that matches the network-device log indices (networklogs-*).
Create a data table
Kibana tables can be exported as CSV files.
Create a pie chart
Feel free to discuss and share improvements.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.