
How to Use ELK Stack for Efficient MySQL Slow Query Log Collection

This guide explains how to collect, parse, and visualize MySQL slow query logs using the ELK stack—Filebeat, Logstash, Elasticsearch, and Kibana—covering configuration details, log formatting, and practical querying techniques.

Collecting and processing MySQL slow query logs can be time‑consuming; this article shows how to improve efficiency by using ELK for slow‑log collection.

ELK Introduction

ELK originally referred to the three open‑source projects Elasticsearch (ES), Logstash, and Kibana. After being acquired and expanded with X‑Pack, Beats, and other components, the suite is now called the Elastic Stack, but the name ELK remains common.

Our MySQL slow‑log collection pipeline is:

1. Install Filebeat on the MySQL server to read the slow log.

2. Filebeat filters the log and forwards it to a Kafka cluster.

3. Logstash consumes the Kafka messages, parses fields, and stores the result as JSON in Elasticsearch.

4. Kibana visualizes the data on a web dashboard.

Slow‑Log Variants

MySQL versions 5.5, 5.6, and 5.7 have slightly different slow‑log formats. Key differences include the format of the time field, the presence of an Id field on the # User@Host line (missing in 5.5), and the occasional absence of the use db line.
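For illustration, a 5.7‑style entry looks roughly like this (the values are taken from the sample Kafka message later in this article; the # Time line format varies by version and the use db line may be absent):

```
# Time: 2018-08-07T09:35:57.140000Z
# User@Host: select[select] @  [10.63.144.16]  Id: 23460596
# Query_time: 0.155956  Lock_time: 0.000079 Rows_sent: 112  Rows_examined: 366458
SET timestamp=1533634557;
SELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1 AND uid != 0;
```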

Processing Approach

Assemble multi‑line log entries into a single record.

Discard the optional # Time line and rely on SET timestamp for the execution time.

Combine the line starting with # User@Host and the final SQL statement into one complete log entry.

Since the log may not contain the database name, embed it in the MySQL account name (e.g., projectName_dbName) so the database can be inferred from the account.
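Assuming the projectName_dbName account convention above, recovering the database name is a one‑line split (the function name is illustrative, not from the original pipeline):

```python
def infer_db(mysql_user: str) -> str:
    """Recover the database name from an account named projectName_dbName.

    Assumes the naming convention described above: the database name is
    everything after the last underscore.
    """
    return mysql_user.rsplit("_", 1)[-1]


print(infer_db("forum_common"))  # -> common
```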

Inject the host IP via Filebeat's name field so the beat.name attribute identifies the source server.

Filebeat Configuration

The full Filebeat configuration:
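A minimal Filebeat 5.x configuration along these lines might look as follows (the paths, server IP, Kafka hosts, and topic are placeholders, not the original values):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/slow/*.log
  # drop the optional "# Time" lines; SET timestamp carries the execution time
  exclude_lines: ['^# Time']
  # group every line that does NOT start with "# Time" or "# User"
  # after the preceding matching line, forming one entry per query
  multiline.pattern: '^# Time|^# User'
  multiline.negate: true
  multiline.match: after
  # start from the end of existing files
  tail_files: true

# identify the source server via the beat.name field
name: 10.63.144.71

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: mysql-slowlog
```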

Important parameters:

input_type : log or stdin.

paths : file path pattern for the slow‑log files (e.g., /data/*.log).

exclude_lines : filter out lines beginning with # Time.

multiline.pattern : regex matching lines that start with # Time or # User, used to group a multi‑line entry.

multiline.negate and multiline.match : control how lines are combined into one entry.

tail_files : true to start reading from the end of existing files.

name : set to the server IP so the host can be identified later.

output.kafka : destination Kafka cluster and topic.

Kafka Message Format

{"@timestamp":"2018-08-07T09:36:00.140Z","beat":{"hostname":"db-7eb166d3","name":"10.63.144.71","version":"5.4.0"},"input_type":"log","message":"# User@Host: select[select] @  [10.63.144.16]  Id: 23460596\n# Query_time: 0.155956  Lock_time: 0.000079 Rows_sent: 112  Rows_examined: 366458\nSET timestamp=1533634557;\nSELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1 AND uid != 0;","offset":1753219021,"source":"/data/slow/mysql_slow.log","type":"log"}
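What Logstash's grok filter extracts from the message field can be sketched in a few lines of Python (the regex below is illustrative, not the article's actual grok pattern):

```python
import re

# Illustrative pattern mirroring the fields the grok filter pulls out of
# the "message" field: user, host, query time, lock time, row counts,
# the SET timestamp value, and the SQL text.
SLOW_RE = re.compile(
    r"^# User@Host: (?P<user>\S+) @\s*\[(?P<host>[\d.]+)\]"
    r".*?"
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)"
    r"\s+Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)\n"
    r"SET timestamp=(?P<timestamp>\d+);\n"
    r"(?P<sql>.+)",
    re.DOTALL,
)

# The "message" field from the Kafka record above.
message = (
    "# User@Host: select[select] @  [10.63.144.16]  Id: 23460596\n"
    "# Query_time: 0.155956  Lock_time: 0.000079 Rows_sent: 112  Rows_examined: 366458\n"
    "SET timestamp=1533634557;\n"
    "SELECT DISTINCT(uid) FROM common_member WHERE hideforum=-1 AND uid != 0;"
)

fields = SLOW_RE.match(message).groupdict()
print(fields["query_time"], fields["host"])  # -> 0.155956 10.63.144.16
```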

Logstash Configuration

The full Logstash configuration:
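A configuration along these lines covers the sections described below (the Kafka and Elasticsearch addresses, topic, and grok pattern are placeholder sketches, not the original values):

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics => ["mysql-slowlog"]
  }
}

filter {
  grok {
    # illustrative pattern: split the multi-line message into structured fields
    match => { "message" => "(?m)^# User@Host: %{NOTSPACE:user} @\s*\[%{IP:host}\].*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\nSET timestamp=%{NUMBER:timestamp_mysql};\n%{GREEDYDATA:sql}" }
  }
  date {
    # use the SET timestamp value as the event time
    match => ["timestamp_mysql", "UNIX"]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["es1:9200"]
    index => "mysql-slowlog-%{+YYYY.MM.dd}"
  }
}
```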

Key sections:

input : Kafka cluster address and topic.

filter : grok patterns split the message field into structured fields such as User, Host, Query_time, Lock_time, and timestamp; the date filter converts timestamp_mysql to the event timestamp so events sort correctly in Kibana.

output : send the parsed events to Elasticsearch, with daily index rotation.

Kibana Visualization

In Kibana, create an index pattern mysql-slowlog-* and select timestamp as the time field.

Use the Discover page to view slow‑log volume over time and filter by fields. For example, to find queries that took longer than 2 seconds, enter query_time:>2 in the search bar.

Click the arrow next to a log entry to view detailed information.

You can also build dashboards to show aggregate statistics, such as the top 10 slow SQL statements.

Conclusion

Don’t be intimidated; starting the collection process is half the battle.

This article has covered collecting MySQL slow logs; once collection is in place, DBAs inspect the logs in Kibana, discuss optimizations with developers, and can later automate alerting.

The ELK ecosystem (including X‑Pack) already provides alerting capabilities; explore and contribute.

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
