
Diagnosing MySQL Memory Exhaustion Caused by Slab Cache and Excessive Inodes

This article details a step-by-step investigation of a MySQL server that ran out of memory due to massive slab-cache consumption by inode and dentry objects. It shows how Linux commands, scripts, and cache dropping resolved the shortage and traced the root cause to an excessive number of partitioned tables.


1. Background

During work hours, a disk-space alarm fired: the root partition was smaller than 16 GB in total and already more than 80 % used. The command

du -Sm / --exclude="/data" | sort -k1nr | head -10

took almost a minute on this machine and identified several large log files that could be deleted.
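It is worth confirming that deleting the logs actually released space, since space held by files that are deleted but still held open by a process is not returned until that process closes them. A quick check with standard tools:

```shell
# report free space on the root filesystem in human-readable units
df -h /
```

If usage does not drop after a cleanup, `lsof +L1` lists deleted-but-still-open files and the processes holding them.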

Immediately after, a memory alarm appeared on the same host.

2. Diagnosis

Checking memory usage confirmed it was exhausted: top showed the mysqld process consuming about 43 GB of resident memory, and with the other processes included the total came to roughly 44 GB. A small Bash script was used to sum the RSS of all processes:

#!/bin/bash
# Sum the resident set size (RSS) of every process.
RSS=0
for PROC in $(ls /proc/ | grep "^[0-9]"); do
  if [ -f /proc/$PROC/statm ]; then
    TEP=$(awk '{print $2}' /proc/$PROC/statm)   # field 2 of statm = resident pages
    RSS=$((RSS + TEP))
  fi
done
RSS=$((RSS * 4 / 1024 / 1024))                  # assumes 4 KB pages; pages -> GB
echo "${RSS}GB"

The script reported 44 GB of RSS.
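As a cross-check on the script above, the same total can be obtained from `ps` alone (a one-liner sketch; note that `ps` reports RSS in KB, so the conversion differs from the page-based script):

```shell
# sum resident memory of all processes; ps reports RSS in KB
ps -eo rss= | awk '{sum += $1} END {printf "%.1fGB\n", sum/1024/1024}'
```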

Further inspection of /proc/meminfo revealed that the slab cache was consuming roughly 16 GB of reclaimable memory. Running slabtop showed that inode and dentry objects were the main contributors.
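The relevant counters can be read directly from /proc/meminfo; SReclaimable is the portion of the slab that the kernel can give back under memory pressure:

```shell
# slab totals: Slab = SReclaimable + SUnreclaim
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```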

awk 'NR > 2 && $3*$4/1024/1024 > 100 {print $1, $3*$4/1024/1024 "MB"}' /proc/slabinfo

The output indicated about 12 GB used by proc_inode_cache and about 3.5 GB by dentry.

Since the memory pressure was caused by reclaimable slab cache, the command

echo 2 > /proc/sys/vm/drop_caches

was executed to reclaim it (a value of 1 drops the page cache, 2 drops reclaimable dentries and inodes, 3 drops both). This immediately resolved the shortage.

3. Root Cause Investigation

To find which directory generated the massive inode/dentry usage, the following loop counted files and sub‑directories under each top‑level directory:

for i in $(ls /); do
  count=$(ls -lR "/$i" | wc -l)
  echo "$i has $count files and dirs"
done

The /proc directory stood out. A deeper scan of its sub‑directories showed that process 15049 (the MySQL server) had millions of files:

for i in $(ls /proc); do
  files=$(ls -lR "/proc/$i" | grep -c "^-")
  dirs=$(ls -lR "/proc/$i" | grep -c "^d")
  echo "$i has $files files and $dirs dirs" >> /tmp/count_tmps
done

sort -k3nr /tmp/count_tmps | head -5

The entry 15049 corresponded to the MySQL daemon. Examining its task sub‑directory revealed that each thread opened roughly 85 k file descriptors, all pointing to MySQL partition files.

ls /proc/15049/task/15120/fd | wc -l
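To confirm that every thread of the process showed the same picture, the fd counts can be listed per task directory (a sketch; 15049 is the mysqld PID from this incident):

```shell
PID=15049   # the mysqld PID from this incident
# count open file descriptors for each thread of $PID, largest first
for t in /proc/$PID/task/*; do
  echo "$t $(ls "$t/fd" 2>/dev/null | wc -l)"
done | sort -k2nr | head -5
```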

Further inspection of information_schema.partitions showed more than 100 partitioned tables, each with thousands of partitions, confirming that the application’s habit of creating many partition tables was the underlying cause.
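The heavily partitioned tables can be listed directly from information_schema (a sketch query; schema and table names will vary, and it needs a live server to run):

```sql
-- tables with the most partitions, largest first
SELECT table_schema, table_name, COUNT(*) AS partition_count
FROM information_schema.partitions
WHERE partition_name IS NOT NULL
GROUP BY table_schema, table_name
ORDER BY partition_count DESC
LIMIT 10;
```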

4. Conclusion

The memory exhaustion was not due to a true OOM situation; the 16 GB of memory was reclaimable slab cache occupied by inode and dentry structures generated by an excessive number of MySQL partition tables. Dropping caches freed the memory, and the root cause was identified as the over‑use of partitioned tables.
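Beyond dropping caches reactively, the kernel's reclaim bias for dentry and inode caches can be inspected (and, cautiously, raised) via vfs_cache_pressure; this is a general tuning knob, not something the original investigation applied:

```shell
# default is 100; values above 100 make the kernel reclaim
# dentry/inode caches more aggressively relative to page cache
cat /proc/sys/vm/vfs_cache_pressure
```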

Tags: Linux, MySQL, Memory, Inode, Diagnostics, DBA, Slab Cache
Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
