Master MySQL 8.0 Backup & Recovery: Full Guide with Scripts, Strategies and Best Practices
This comprehensive guide explains why MySQL backups are essential, defines RPO/RTO, compares logical and physical tools, provides ready‑to‑run scripts for full, incremental, and binlog backups plus backup encryption and compression, and details step‑by‑step recovery procedures, troubleshooting tips, and monitoring recommendations for reliable database protection.
Background
Database backup is essential to prevent data loss from hardware failures, software bugs, or human error. A 2024 incident where a major cloud provider lost customer data due to a backup defect highlights the need for a robust backup strategy.
Backup Importance and Risk Assessment
RPO and RTO
RPO (Recovery Point Objective) – maximum acceptable data loss measured in time (e.g., 1 hour). RTO (Recovery Time Objective) – maximum time allowed to restore service (e.g., 4 hours).
Core transaction systems: RPO < 5 min, RTO < 30 min
User data (accounts, orders): RPO < 1 hour, RTO < 4 hours
Log data: RPO < 24 hours, RTO < 24 hours
Archived historical data: RPO < 1 week, RTO < 1 week
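The RPO tiers above are only meaningful if they are checked continuously. A minimal sketch of an operational check that treats the age of the newest backup artifact as the effective RPO (the backup path and threshold are illustrative assumptions, not part of the original procedures):

```shell
#!/bin/bash
# rpo_check.sh – report the effective RPO as the age of the newest backup
# artifact under a backup root. Paths and thresholds are illustrative.
check_rpo() {
  local backup_root="$1" rpo_seconds="$2"
  local newest
  # Newest file anywhere under the backup root (GNU find)
  newest=$(find "$backup_root" -type f -printf '%T@\n' 2>/dev/null | sort -rn | head -1)
  if [ -z "$newest" ]; then
    echo "NO BACKUPS FOUND under $backup_root"
    return 2
  fi
  local age=$(( $(date +%s) - ${newest%.*} ))
  if [ "$age" -gt "$rpo_seconds" ]; then
    echo "RPO VIOLATED: newest backup is ${age}s old (limit ${rpo_seconds}s)"
    return 1
  fi
  echo "RPO OK: newest backup is ${age}s old"
}

# Demo against a throwaway directory; point it at the real backup root in cron
demo=$(mktemp -d)
touch "$demo/mydb_20260101.sql.gz"
check_rpo "$demo" 3600
rm -rf "$demo"
```

Run from cron at an interval shorter than the tightest RPO tier, so a stalled backup job is detected before the objective is actually missed.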
Typical Failure Scenarios
# Human error risk list
# Scenario 1: Accidental table drop – HIGH risk, requires backup + binlog replay
# Scenario 2: Malicious TRUNCATE – HIGH risk, requires binlog
# Scenario 3: DROP DATABASE – DISASTER, full physical backup needed
# Scenario 4: Upgrade failure – HIGH risk, physical backup helps
# Scenario 5: Master‑slave switch inconsistency – MEDIUM risk, re‑sync from master
Backup Strategy Decision Tree
Data loss tolerance?
├─ Extremely low (RPO < 5 min) → xtrabackup incremental every 5‑15 min + real‑time binlog
├─ Low (RPO < 1 hour) → xtrabackup incremental hourly + daily full
├─ Medium (RPO 1‑24 h) → daily mysqldump full + binlog archive
└─ Loose (RPO > 24 h) → daily mysqldump full only
Backup Method Comparison
Logical backup (mysqldump, mydumper)
Pros: works across MySQL versions, human‑readable SQL, flexible restore.
Cons: slow on large datasets (restore replays every SQL statement), consistency requires --single-transaction for InnoDB (non‑transactional engines need table locks), uncompressed dumps can be large.
Physical backup (xtrabackup, mysqlbackup)
Pros: fast backup/restore, maintains InnoDB consistency by copying data files together with the redo log (locks are only needed briefly for non‑InnoDB tables and metadata, not for the whole backup).
Cons: larger files, version‑locked, harder to restore to a different OS.
Tool Feature Matrix
mysqldump – logical, slow, needs --single-transaction, suitable for < 10 GB.
mydumper – logical, multi‑threaded, medium speed, supports transactions, good for medium‑large DBs.
xtrabackup – physical, fast, supports incremental, ideal for large production clusters.
mysqlbackup – physical, fast, Enterprise‑only, for MySQL Enterprise Edition.
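Because physical backups are version‑locked, it is worth confirming that the xtrabackup build matches the server's major.minor series before relying on it. A minimal sketch of the comparison logic, using hypothetical version strings (in practice you would capture them from `mysqld --version` and `xtrabackup --version`):

```shell
#!/bin/bash
# xtrabackup must match the server's series (e.g., Percona XtraBackup 8.0
# for MySQL 8.0). This compares two version strings on major.minor only;
# the sample values below are illustrative.
same_series() {
  local server="$1" tool="$2"
  local s_mm="${server%.*}" t_mm="${tool%.*}"   # keep major.minor, drop patch
  [ "$s_mm" = "$t_mm" ]
}

if same_series "8.0.35" "8.0.33"; then
  echo "compatible series"
else
  echo "WARNING: version series mismatch – a physical backup may not restore"
fi
```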
Backup Scripts
mysqldump Full Backup
#!/bin/bash
# Full logical backup with consistency options.
# --master-data=2 records the binlog coordinates as a comment in the dump
# (the option was renamed --source-data in MySQL 8.0.26+).
TS=$(date +%Y%m%d)
mysqldump -u root -p -h localhost \
  --single-transaction \
  --routines --triggers --events \
  --master-data=2 \
  --flush-logs \
  mydb > /backup/mydb_${TS}.sql
gzip /backup/mydb_${TS}.sql
xtrabackup Full Backup
#!/bin/bash
BACKUP_DIR="/backup/xtrabackup/full"
MYSQL_USER="backup_user"
MYSQL_PASSWORD="BackupPass2026!"
# Compute the target directory once; calling date twice could yield two
# different paths and leave the prepare step pointing at nothing
TARGET_DIR="$BACKUP_DIR/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$TARGET_DIR"
xtrabackup --user="$MYSQL_USER" \
  --password="$MYSQL_PASSWORD" \
  --backup \
  --target-dir="$TARGET_DIR" \
  --datadir=/var/lib/mysql \
  --parallel=4
# Prepare with --apply-log-only so the rollback phase is skipped and
# incremental backups can still be layered on top of this full backup
xtrabackup --prepare --apply-log-only --target-dir="$TARGET_DIR"
mydumper Parallel Backup
# Install from source if needed
# git clone https://github.com/mydumper/mydumper.git
# make && make install
mydumper -u root -p 'MyPassword2026!' -h localhost -B mydb \
-o /backup/mydb_$(date +%Y%m%d) -t 8 -v 3
# Restore with myloader
myloader -u root -p 'MyPassword2026!' -h localhost -d /backup/mydb_$(date +%Y%m%d) -t 8 -B mydb
Incremental Backup Script (xtrabackup)
BASE=$(ls -td /backup/xtrabackup/full/* | head -1)
INCR_DIR="/backup/xtrabackup/incr/$(date +%Y%m%d_%H%M%S)"
xtrabackup --user=backup_user --password=BackupPass2026! \
--backup --target-dir="$INCR_DIR" \
--incremental-basedir="$BASE" --datadir=/var/lib/mysql
Binlog‑Based Incremental Backup
# binlog_backup.sh – runs every 15 min via cron
MYSQL_USER="backup_admin"
MYSQL_PASSWORD="SecureBackupPass2026!"
BACKUP_DIR="/backup/binlog"
mkdir -p "$BACKUP_DIR"
mysql -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "FLUSH BINARY LOGS;"
# Copy closed binlogs only; the newly active log is still being written,
# and cp -n would freeze a partial copy of it forever
CURRENT=$(mysql -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -N -e "SHOW MASTER STATUS;" | awk '{print $1}')
for f in /var/lib/mysql/mysql-bin.[0-9]*; do
  [ "$(basename "$f")" = "$CURRENT" ] || cp -n "$f" "$BACKUP_DIR/"
done
find "$BACKUP_DIR" -name "mysql-bin.*" ! -name "*.gz" -mtime +1 -exec gzip {} \;
find "$BACKUP_DIR" -name "*.gz" -mtime +7 -delete
Backup Encryption & Compression
GPG Symmetric Encryption
# encrypt_backup_gpg.sh
BACKUP_FILE="$1"
PASS_FILE=$(mktemp)
openssl rand -base64 32 > "$PASS_FILE"
# Encrypt the backup symmetrically with the random passphrase
gpg --batch --yes --symmetric --passphrase-file "$PASS_FILE" \
  --cipher-algo AES256 -o "${BACKUP_FILE}.gpg" "$BACKUP_FILE"
# Protect the passphrase with the recipient's public key and keep it as a
# separate file next to the backup; concatenating the two GPG files into
# one would make them unrecoverable, and cat-ing into a file that is also
# being read truncates it to zero bytes
gpg --batch --yes --encrypt --recipient [email protected] \
  -o "${BACKUP_FILE}.key.gpg" "$PASS_FILE"
rm -f "$PASS_FILE"
OpenSSL AES‑256‑CBC Encryption
# encrypt_backup_openssl.sh
BACKUP_FILE="$1"
ENCRYPTED_FILE="${BACKUP_FILE}.enc"
PASSWORD=$(openssl rand -base64 32)
openssl enc -aes-256-cbc -salt -pbkdf2 -in "$BACKUP_FILE" \
  -out "$ENCRYPTED_FILE" -pass pass:"$PASSWORD"
# Store the key separately from the ciphertext (e.g., in a KMS or vault);
# bundling the plaintext key with the encrypted file would defeat the
# encryption entirely
echo "$PASSWORD" > "${ENCRYPTED_FILE}.key"
chmod 600 "${ENCRYPTED_FILE}.key"
echo "Key written to ${ENCRYPTED_FILE}.key – move it to secure storage"
Compression Comparison Script
# compare_compression.sh – usage: ./compare_compression.sh /path/to/file.sql
TEST_FILE="$1"
RESULTS="/tmp/compression_test.txt"
echo "Compression algorithm comparison" > "$RESULTS"
gzip -c "$TEST_FILE" > /tmp/test_gzip.sql.gz
echo "gzip -6: $(du -h /tmp/test_gzip.sql.gz | cut -f1)" >> "$RESULTS"
if command -v pigz >/dev/null; then
pigz -c "$TEST_FILE" > /tmp/test_pigz.sql.gz
echo "pigz -6: $(du -h /tmp/test_pigz.sql.gz | cut -f1)" >> "$RESULTS"
fi
if command -v zstd >/dev/null; then
zstd -c "$TEST_FILE" > /tmp/test_zstd.sql.zst
echo "zstd -3: $(du -h /tmp/test_zstd.sql.zst | cut -f1)" >> "$RESULTS"
fi
xz -c "$TEST_FILE" > /tmp/test_xz.sql.xz
echo "xz -6: $(du -h /tmp/test_xz.sql.xz | cut -f1)" >> "$RESULTS"
cat "$RESULTS"
Recovery Procedures
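Before running any restore, it pays to verify the artifact itself. A minimal pre‑flight check for gzipped mysqldump files (the header check and file naming are illustrative assumptions):

```shell
#!/bin/bash
# verify_dump.sh – sanity-check a gzipped mysqldump artifact before restore.
verify_dump() {
  local f="$1"
  [ -s "$f" ] || { echo "FAIL: $f missing or empty"; return 1; }
  gzip -t "$f" 2>/dev/null || { echo "FAIL: $f failed gzip integrity check"; return 1; }
  # mysqldump output begins with a "-- MySQL dump" header comment
  if ! gunzip -c "$f" | head -1 | grep -q "MySQL dump"; then
    echo "WARN: $f does not look like a mysqldump file"
  fi
  echo "OK: $f passed pre-restore checks"
}

# Demo with a synthetic dump; in production pass the real backup file
demo=$(mktemp --suffix=.sql.gz)
printf -- "-- MySQL dump 10.13\nCREATE TABLE t (id INT);\n" | gzip > "$demo"
verify_dump "$demo"
rm -f "$demo"
```

Gate the restore scripts below on this check so a truncated or corrupted file is caught before the existing database is dropped.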
mysqldump Restore
#!/bin/bash
BACKUP_FILE="$1"
TARGET_DB="$2"
if [[ "$BACKUP_FILE" == *.gz ]]; then
DECOMPRESSED=$(mktemp)
gunzip -c "$BACKUP_FILE" > "$DECOMPRESSED"
BACKUP_FILE="$DECOMPRESSED"
fi
if [[ -n "$TARGET_DB" ]]; then
mysql -u root -e "DROP DATABASE IF EXISTS \`$TARGET_DB\`; CREATE DATABASE \`$TARGET_DB\` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
mysql -u root "$TARGET_DB" < "$BACKUP_FILE"
else
mysql -u root < "$BACKUP_FILE"
fi
if [[ $? -eq 0 ]]; then
echo "Data restore succeeded"
else
echo "ERROR: Data restore failed"
exit 1
fi
[ -n "${DECOMPRESSED:-}" ] && rm -f "$DECOMPRESSED"
xtrabackup Full Restore
#!/bin/bash
set -euo pipefail
BACKUP_DIR="$1"
MYSQL_DATA_DIR="/var/lib/mysql"
systemctl stop mysql || systemctl stop mysqld || service mysql stop
xtrabackup --prepare --target-dir="$BACKUP_DIR" --no-version-check
mv "$MYSQL_DATA_DIR" "${MYSQL_DATA_DIR}.bak.$(date +%Y%m%d%H%M%S)" 2>/dev/null || true
mkdir -p "$MYSQL_DATA_DIR"
xtrabackup --copy-back --target-dir="$BACKUP_DIR" --datadir="$MYSQL_DATA_DIR" --no-version-check
chown -R mysql:mysql "$MYSQL_DATA_DIR"
chmod -R 750 "$MYSQL_DATA_DIR"
systemctl start mysql || systemctl start mysqld || service mysql start
sleep 5
if mysql -u root -e "SELECT 1" >/dev/null 2>&1; then
echo "MySQL started, restore verified"
else
echo "WARNING: MySQL start verification failed"
fi
xtrabackup Incremental Restore
#!/bin/bash
set -euo pipefail
FULL_BACKUP="$1"
INC_BACKUPS="$2" # comma‑separated list
MYSQL_DATA_DIR="/var/lib/mysql"
# Prepare the full backup with --apply-log-only: skipping the rollback
# phase is required, or the incrementals can no longer be applied
xtrabackup --prepare --apply-log-only --target-dir="$FULL_BACKUP" --no-version-check
# Apply each incremental backup, also with --apply-log-only
OLD_IFS="$IFS"
IFS=','
for inc in $INC_BACKUPS; do
  xtrabackup --prepare --apply-log-only --target-dir="$FULL_BACKUP" \
    --incremental-dir="$inc" --no-version-check
done
IFS="$OLD_IFS"
# Final prepare pass performs the rollback phase once, after all
# incrementals have been merged
xtrabackup --prepare --target-dir="$FULL_BACKUP" --no-version-check
# Stop MySQL and replace data directory
systemctl stop mysql || systemctl stop mysqld || service mysql stop
mv "$MYSQL_DATA_DIR" "${MYSQL_DATA_DIR}.bak.$(date +%Y%m%d%H%M%S)" 2>/dev/null || true
mkdir -p "$MYSQL_DATA_DIR"
xtrabackup --copy-back --target-dir="$FULL_BACKUP" --datadir="$MYSQL_DATA_DIR" --no-version-check
chown -R mysql:mysql "$MYSQL_DATA_DIR"
chmod -R 750 "$MYSQL_DATA_DIR"
systemctl start mysql || systemctl start mysqld || service mysql start
Troubleshooting & Common Issues
Backup file too large – use streaming compression (e.g., mysqldump … | gzip > backup.sql.gz) or split by database.
Access denied (error 1045) – verify backup user privileges; escape special characters or use --defaults-extra-file.
Inconsistent backup – ensure --single-transaction for InnoDB or lock all tables for mixed engines.
xtrabackup cannot list *.ibd files – fix filesystem permissions or SELinux/AppArmor restrictions.
Version mismatch errors – use an xtrabackup version compatible with the MySQL server or perform a logical dump migration instead.
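Several of the failure scenarios listed earlier (accidental DROP, malicious TRUNCATE) are recovered by restoring the latest full backup and then replaying archived binlogs up to just before the offending statement. A hedged sketch of that replay step, defaulting to a dry run; the paths, start position, and stop time are illustrative and must be determined from your own binlogs with `mysqlbinlog --verbose`:

```shell
#!/bin/bash
# pitr_replay.sh – sketch of point-in-time recovery via binlog replay.
# Run only after restoring the most recent full backup. All values below
# are illustrative assumptions.
BINLOG_DIR="${BINLOG_DIR:-/backup/binlog}"
START_POS="${START_POS:-4}"                   # coordinates recorded by --master-data
STOP_DT="${STOP_DT:-2026-01-15 10:29:59}"     # one second before the bad statement

CMD="mysqlbinlog --start-position=$START_POS --stop-datetime='$STOP_DT' $BINLOG_DIR/mysql-bin.* | mysql -u root -p"
if [ "${DRY_RUN:-1}" = "1" ]; then
  # Default to printing the command so it can be reviewed before it is
  # ever executed against a production server
  echo "Would run: $CMD"
else
  eval "$CMD"
fi
```

Set DRY_RUN=0 only after confirming the stop time against the binlog contents; replaying one statement too many re-executes the very DROP you are recovering from.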
Backup Management Best Practices
Maintain a layered strategy: daily incremental, weekly full, continuous binlog archiving.
Automate integrity checks (e.g., gzip -t for compressed dumps) and schedule regular restore drills.
Encrypt backups at rest and in transit; store encryption keys securely (KMS, HSM).
Implement monitoring and alerting for backup success/failure, size anomalies, and missing binlog updates.
Enforce retention policies (e.g., keep full backups 30 days, incrementals 7 days) and purge old data safely.
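The integrity-check and monitoring practices above can be combined in one cron job: verify the newest dump, then export a success timestamp that Prometheus can alert on via the node_exporter textfile collector. A sketch, where the metric name, paths, and file layout are assumptions:

```shell
#!/bin/bash
# backup_health.sh – verify the newest dump and export a success timestamp
# for Prometheus via the node_exporter textfile collector. Metric name and
# paths are illustrative.
export_backup_metric() {
  local backup_dir="$1" textfile_dir="$2"
  local latest
  latest=$(find "$backup_dir" -name '*.sql.gz' -printf '%T@ %p\n' 2>/dev/null \
           | sort -rn | head -1 | cut -d' ' -f2-)
  if [ -n "$latest" ] && gzip -t "$latest" 2>/dev/null; then
    mkdir -p "$textfile_dir"
    printf 'mysql_backup_last_success_timestamp_seconds %s\n' "$(date +%s)" \
      > "$textfile_dir/mysql_backup.prom"
    echo "healthy: $latest"
  else
    echo "UNHEALTHY: no valid dump under $backup_dir"
    return 1
  fi
}

# Demo against throwaway directories; in production point these at the real
# backup root and the node_exporter textfile_collector directory
demo_backups=$(mktemp -d); demo_metrics=$(mktemp -d)
echo "CREATE TABLE t (id INT);" | gzip > "$demo_backups/mydb.sql.gz"
export_backup_metric "$demo_backups" "$demo_metrics"
cat "$demo_metrics/mysql_backup.prom"
rm -rf "$demo_backups" "$demo_metrics"
```

Because the metric is only written after a dump passes `gzip -t`, a stale timestamp covers both "backup never ran" and "backup ran but produced a corrupt file".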
Sample Prometheus Alert Rules
groups:
  - name: MySQLBackupAlerts
    rules:
      - alert: MySQLBackupMissing
        # Assumes the backup job exports a completion timestamp via the
        # node_exporter textfile collector; PromQL cannot stat files directly
        expr: time() - mysql_backup_last_success_timestamp_seconds > 86400
        for: 1h
        labels:
          severity: critical
        annotations:
          summary: "MySQL full backup missing"
          description: "No full backup created in the last 24 hours."
      - alert: MySQLBackupFailing
        expr: increase(mysql_backup_errors_total[1h]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL backup job failed"
          description: "Backup task reported errors in the past hour."
Conclusion
Effective MySQL backup and recovery requires reliability, timeliness, security, recoverability, and continuous monitoring. Regularly verify backup integrity, automate restore drills, and align RPO/RTO with business requirements. Treat backup health as a first‑class KPI to avoid painful data loss incidents.
MaGe Linux Operations
Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.
