How to Use ELK Stack for Efficient MySQL Slow Query Log Collection
This guide explains how to collect, parse, and visualize MySQL slow query logs using the ELK stack—Filebeat, Logstash, Elasticsearch, and Kibana—covering configuration details, log formatting, and practical querying techniques.
Collecting and processing MySQL slow query logs can be time‑consuming; this article shows how to improve efficiency by using ELK for slow‑log collection.
ELK Introduction
ELK originally referred to the three open‑source projects Elasticsearch (ES), Logstash, and Kibana. After being acquired and expanded with X‑Pack, Beats, and other components, the suite is now called the Elastic Stack, but the name ELK remains common.
Our MySQL slow‑log collection pipeline is:
Install Filebeat on the MySQL server to read the slow‑log.
Filebeat filters the log and forwards it to a Kafka cluster.
Logstash consumes Kafka messages, parses fields, and stores the result as JSON in Elasticsearch.
Kibana visualizes the data on a web dashboard.
Slow‑Log Variants
MySQL versions 5.5, 5.6, and 5.7 have slightly different slow‑log formats.
Version‑specific examples:
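For reference, a MySQL 5.7‑style entry (matching the raw message embedded in the Kafka sample shown later in this article) looks like:

```
# Time: 2018-08-07T09:35:57.140529Z
# User@Host: select[select] @ [10.63.144.16]  Id: 23460596
# Query_time: 0.155956  Lock_time: 0.000079 Rows_sent: 112  Rows_examined: 366458
SET timestamp=1533634557;
SELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1 AND uid != 0;
```

In 5.5 the `# Time` line uses the short `yymmdd h:mm:ss` format and the `Id` field is absent.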
Key differences include the time‑field formats, the presence of an `Id` field (missing in 5.5), and occasionally absent `use db` lines.
Processing Approach
Assemble multi‑line log entries into a single record.
Discard the optional `# Time` line and rely on `SET timestamp` for the execution time.
Combine the line starting with `# User@Host` and the final SQL statement into one complete log entry.
Since the log may not contain the database name, embed it in the MySQL account name (e.g., `projectName_dbName`) so the DB can be inferred.
Inject the host IP via Filebeat's `name` field so the `beat.name` attribute identifies the source server.
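The parsing rules above can be sketched in a few lines of Python. This is an illustrative helper, not part of the actual pipeline: the regex and the `projectName_dbName` account convention follow this article, and the sample entry is adapted from the Kafka message shown later.

```python
import re

# Matches one assembled slow-log entry: the # User@Host line, the
# # Query_time line, the SET timestamp statement, and the SQL itself.
ENTRY_RE = re.compile(
    r"# User@Host: (?P<account>\S+)\[\S*\] @ .*?\[(?P<ip>[\d.]*)\].*?"
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+).*?"
    r"SET timestamp=(?P<timestamp>\d+);\s*(?P<sql>.+)",
    re.S,
)

def parse_slow_entry(entry: str) -> dict:
    m = ENTRY_RE.search(entry)
    if not m:
        raise ValueError("unrecognized slow-log entry")
    fields = m.groupdict()
    # Infer the database from the account name: projectName_dbName.
    account = fields["account"]
    fields["db"] = account.split("_", 1)[1] if "_" in account else None
    fields["query_time"] = float(fields["query_time"])
    return fields

entry = (
    "# User@Host: bbs_common[bbs_common] @ [10.63.144.16]  Id: 23460596\n"
    "# Query_time: 0.155956  Lock_time: 0.000079 Rows_sent: 112  Rows_examined: 366458\n"
    "SET timestamp=1533634557;\n"
    "SELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1;"
)
parsed = parse_slow_entry(entry)
print(parsed["db"], parsed["query_time"])  # prints the inferred db and the query time
```

In production this splitting is done by Filebeat's multiline settings and Logstash's grok filter; the sketch only shows why embedding the DB name in the account makes it recoverable downstream.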
Filebeat Configuration
The full Filebeat configuration:
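A minimal `filebeat.yml` sketch consistent with the parameters described below, for the Filebeat 5.x series used in this article; the paths, server IP, and Kafka hosts/topic are placeholder assumptions:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/slow/mysql_slow.log
  # Drop the optional "# Time" lines; SET timestamp carries the time.
  exclude_lines: ['^# Time']
  # Lines NOT matching this pattern are appended to the previous event.
  multiline.pattern: '^# Time|^# User'
  multiline.negate: true
  multiline.match: after
  tail_files: true

# Set to the server IP so beat.name identifies the source host.
name: 10.63.144.71

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: mysql-slowlog
```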
Important parameters:
`input_type`: log or stdin.
`paths`: file path pattern for slow‑log files (e.g., `/data/*.log`).
`exclude_lines`: filter out lines beginning with `# Time`.
`multiline.pattern`: regex matching lines that start with `# Time` or `# User`, used for multiline grouping.
`multiline.negate` and `multiline.match`: control how matched lines are combined.
`tail_files`: true to start reading from the end of existing files.
`name`: set to the server IP to identify the host later.
`output.kafka`: destination Kafka cluster and topic.
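The multiline semantics can be illustrated with a small sketch (not Filebeat's actual implementation): with `negate: true` and `match: after`, any line that does not match the pattern is appended to the event started by the last matching line.

```python
import re

# The same pattern as multiline.pattern in the Filebeat config.
PATTERN = re.compile(r"^# Time|^# User")

def group_multiline(lines):
    """Group raw lines into events: a matching line starts a new event,
    and every non-matching line is appended to the current one."""
    events, current = [], []
    for line in lines:
        if PATTERN.search(line) and current:
            events.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        events.append("\n".join(current))
    return events

lines = [
    "# User@Host: app[app] @ [10.0.0.1]",
    "# Query_time: 2.5  Lock_time: 0.0",
    "SET timestamp=1533634557;",
    "SELECT 1;",
    "# User@Host: app[app] @ [10.0.0.2]",
    "SELECT 2;",
]
print(len(group_multiline(lines)))  # two assembled events
```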
Kafka Message Format
```json
{"@timestamp":"2018-08-07T09:36:00.140Z","beat":{"hostname":"db-7eb166d3","name":"10.63.144.71","version":"5.4.0"},"input_type":"log","message":"# User@Host: select[select] @ [10.63.144.16] Id: 23460596\n# Query_time: 0.155956 Lock_time: 0.000079 Rows_sent: 112 Rows_examined: 366458\nSET timestamp=1533634557;\nSELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1 AND uid != 0;","offset":1753219021,"source":"/data/slow/mysql_slow.log","type":"log"}
```

Logstash Configuration
The full Logstash configuration:
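A minimal `logstash.conf` sketch consistent with the sections described below; the grok pattern, field names, and host endpoints are illustrative assumptions rather than the exact production configuration:

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics => ["mysql-slowlog"]
    codec => "json"   # Filebeat ships each event as a JSON document
  }
}

filter {
  grok {
    match => { "message" => "(?m)^# User@Host: %{USER:user}\[[^\]]*\] @ \s*\[%{IP:client_ip}?\].*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}.*SET timestamp=%{NUMBER:timestamp_mysql};\s*(?<sql>.*)" }
  }
  date {
    # Use the SET timestamp value as the event time for correct sorting.
    match => ["timestamp_mysql", "UNIX"]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["es1:9200"]
    index => "mysql-slowlog-%{+YYYY.MM.dd}"   # daily index rotation
  }
}
```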
Key sections:
`input`: Kafka cluster address and topic.
`filter`: use grok patterns to split the `message` field into structured fields such as User, Host, Query_time, Lock_time, and timestamp. The `date` filter converts `timestamp_mysql` to the event timestamp so events sort correctly in Kibana.
`output`: send parsed events to Elasticsearch, with daily index rotation.
Kibana Visualization
In Kibana, create an index pattern `mysql-slowlog-*` and select `timestamp` as the time field.
Use the Discover page to view slow‑log volume over time and filter by fields. For example, to find queries that took longer than 2 seconds, enter `query_time: > 2` in the search bar.
Click the arrow next to a log entry to view detailed information.
You can also build dashboards to show aggregate statistics, such as the top 10 slow SQL statements.
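A "top 10 slow SQL" widget maps to an Elasticsearch terms aggregation under the hood. A sketch of the underlying query, where the `sql` and `query_time` field names are assumptions based on the parsed fields in this article:

```json
{
  "size": 0,
  "aggs": {
    "top_slow_sql": {
      "terms": {
        "field": "sql.keyword",
        "size": 10,
        "order": { "avg_query_time": "desc" }
      },
      "aggs": {
        "avg_query_time": { "avg": { "field": "query_time" } }
      }
    }
  }
}
```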
Conclusion
Don’t be intimidated; starting the collection process is half the battle.
This article has covered the collection side of MySQL slow‑log handling. Once the logs are in Kibana, DBAs can inspect them regularly, discuss optimizations with developers, and eventually automate alerting.
The Elastic ecosystem (including X‑Pack) already provides alerting capabilities; explore and contribute.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles focused on operations transformation.