
How to Prevent Accidental Database Deletion: Backup Strategies and Automation

This guide explains why accidental database deletion is a critical risk, outlines permission and audit measures, and provides detailed MySQL and file backup solutions—including encryption, multi‑site storage, cron scheduling, and ready‑to‑run Bash scripts—to ensure data safety and system stability.


In development and operations, "deleting the database and running away" refers to the severe situation where production data is unintentionally removed, causing business interruption and potential financial loss.

To prevent such incidents, this article recommends strict permission control, comprehensive audit logging, a robust backup strategy with off-site copies stored across multiple regions, the principle of least privilege, continuous monitoring, and regular security training. Above all, backups are the most essential safeguard.

The environment used in the examples consists of an Ubuntu host running Docker containers for MySQL, a FastDFS distributed file system, and the expect tool for automating interactive tasks.
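Before wiring up the scripts below, it helps to confirm the required tools actually exist on the host. A minimal sketch (the tool list mirrors this article's examples and the `check_tools` helper is illustrative; adjust both for your environment):

```shell
#!/bin/bash
# check_tools prints any missing commands and returns non-zero if one is absent.
check_tools() {
  local missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# Verify the tools used by the backup scripts in this article
check_tools tar openssl split scp expect docker || echo "install the missing tools first"
```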

The backup plan for MySQL databases uses mysqldump to export the data, compresses the dump with tar, encrypts the archive with openssl, deletes expired backups, and transfers the encrypted file to remote servers via scp or rsync.

The backup plan for data files compresses and encrypts the files, splits the archive into fixed-size volumes, merges the volumes when needed, copies the result to a remote server, and cleans up old backups.

Automation is achieved by adding a daily cron job at 02:10 that runs the backup scripts and appends their output to /home/passjava/backup/cron_log.txt:

crontab -u root -e
10 2 * * * bash /home/passjava/backup/your_script >> /home/passjava/backup/cron_log.txt 2>&1

Database backup script (Bash):

#!/bin/bash
# MySQL credentials (prefer a dedicated backup user over root in production)
mysql_user="root"
mysql_password="xxx"
mysql_host="database_server_ip"
mysql_port="3306"
# Backup destination and retention
backup_location=/home/passjava/backup/mysql/passjava_web
expire_backup_delete="ON"
expire_days=7
backup_time=$(date +%Y-%m-%d-%H-%M-%S)
mkdir -p "$backup_location"
# Get the MySQL container ID
mysqlContainerName=$(sudo docker ps -q --filter="name=mysql")
# Dump the database from inside the Docker container
sudo docker exec "$mysqlContainerName" mysqldump passjava_web -u"$mysql_user" -p"$mysql_password" > "$backup_location/$backup_time-backup-mysql-passjava_web.sql"
# Compress and encrypt (the resulting .tar.gz is an encrypted archive)
tar -czvf - "$backup_location/$backup_time-backup-mysql-passjava_web.sql" | openssl des3 -salt -k passjava123456 -out "$backup_location/$backup_time-backup-mysql-passjava_web.sql.tar.gz"
# Delete expired backups
if [ "$expire_backup_delete" == "ON" ] && [ -n "$backup_location" ]; then
    find "$backup_location/" -type f -mtime +"$expire_days" -delete
    echo "Expired backup data delete complete!"
fi
# Remote backup via expect
expect -c "
    spawn scp -r $backup_location/$backup_time-backup-mysql-passjava_web.sql.tar.gz passjava@remote1:/home/passjava/backup/mysql/passjava_web
    expect {\"*assword\" {set timeout 300; send \"passjava\r\"; exp_continue;} \"yes/no\" {send \"yes\r\";}}
    spawn scp -r $backup_location/$backup_time-backup-mysql-passjava_web.sql.tar.gz passjava@remote2:/home/passjava/backup/mysql/passjava_web
    expect {\"*assword\" {set timeout 300; send \"passjava\r\"; exp_continue;} \"yes/no\" {send \"yes\r\";}}
    expect eof"

echo "Remote backup completed"
# Clean up the local unencrypted dump file
rm -f "$backup_location/$backup_time-backup-mysql-passjava_web.sql"
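A backup is only as good as its restore path. The sketch below round-trips a sample dump through the same tar-then-openssl pipeline the script uses, then decrypts and extracts it, so the restore commands can be rehearsed without touching production (the temp directory, demo.sql, and the passphrase are illustrative; the final mysql import is left as a comment because it needs a running server):

```shell
#!/bin/bash
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/restored"
printf 'CREATE TABLE demo (id INT);\n' > "$workdir/demo.sql"

# Encrypt with the same pipeline the backup script uses
tar -czf - -C "$workdir" demo.sql \
  | openssl des3 -salt -k passjava123456 -out "$workdir/demo.sql.tar.gz"

# Restore: decrypt, then extract the dump
openssl des3 -d -k passjava123456 -in "$workdir/demo.sql.tar.gz" \
  | tar -xzf - -C "$workdir/restored"

# Re-import into MySQL (commented out; requires a running server):
# mysql -uroot -p passjava_web < "$workdir/restored/demo.sql"
echo "restore rehearsal ok"
```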

File backup script (Bash) for FastDFS or Redis data:

#!/bin/bash
# Backup destination and retention
backup_location=/home/passjava/backup/fdfs/data
expire_backup_delete="ON"
expire_days=7
backup_time=$(date +%Y-%m-%d-%H-%M-%S)
mkdir -p "$backup_location"
# Compress, encrypt, and split into 200M volumes
tar -czvf - /home/passjava/fdfs | openssl des3 -salt -k passjava123456 | split -b 200m -d - "$backup_location/$backup_time-fdfs-data.tar.gz"
# Delete expired backups
if [ "$expire_backup_delete" == "ON" ] && [ -n "$backup_location" ]; then
    find "$backup_location/" -type f -mtime +"$expire_days" -delete
    echo "Expired backup data delete complete!"
fi
# Merge the split volumes back into a single file
cat "$backup_location/$backup_time-fdfs-data.tar.gz"* > "$backup_location/$backup_time-fdfs-data-all.tar.gz"
# Remote copy via expect
expect -c "
    spawn scp -r $backup_location/$backup_time-fdfs-data-all.tar.gz [email protected]:/home/passjava/backup/fdfs/data
    expect {\"*assword\" {set timeout 300; send \"passjava\r\"; exp_continue;} \"yes/no\" {send \"yes\r\";}}
    expect eof"

echo "Remote backup of fdfs completed"
# Clean up the local split volumes
rm -f "$backup_location/$backup_time-fdfs-data.tar.gz"*
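Restoring a split archive is the reverse of the pipeline above: concatenate the volumes in order, decrypt, and extract. A self-contained rehearsal using a small random file and 1K volumes (directory names and the passphrase are illustrative; the article's scripts use 200M volumes):

```shell
#!/bin/bash
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/data" "$workdir/restored"
head -c 4096 /dev/urandom > "$workdir/data/blob.bin"

# Compress, encrypt, and split into 1K volumes
tar -czf - -C "$workdir" data \
  | openssl des3 -salt -k passjava123456 \
  | split -b 1k -d - "$workdir/backup.tar.gz."

# Restore: merge the volumes in order, decrypt, extract
cat "$workdir"/backup.tar.gz.* \
  | openssl des3 -d -k passjava123456 \
  | tar -xzf - -C "$workdir/restored"
echo "split restore ok"
```

The numeric suffixes produced by `split -d` sort lexically, so the shell glob feeds the volumes to `cat` in the correct order.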

The conclusion reiterates that preventing accidental data loss requires proper permission management, regular encrypted backups, and automated scripts to maintain data integrity and system reliability.

Tags: Docker, MySQL, Backup, Cron, data protection, bash script
Written by Wukong Talks Architecture

Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.
