
Why Does a Linux Session Timeout Break MySQL Imports? Solutions & Best Practices

The article explains how SSH idle timeout, network drops, or closing the terminal send a SIGHUP signal that aborts MySQL import scripts, and it provides step‑by‑step solutions using nohup, tmux/screen, and systemd along with practical tips for reliable large‑scale data loading.

Advanced AI Application Practice

When a long‑running MySQL import such as mysql -u root -p db_name < huge_file.sql is executed over SSH, the import is aborted if the SSH session ends: when the connection closes, the controlling terminal goes away and SIGHUP is delivered to the shell's processes, including the mysql client. The default action of SIGHUP is to terminate the process, leaving the import incomplete and risking data‑integrity problems.

Root Cause

SSH server or client idle‑timeout closes an idle connection.

Network connectivity loss between the client and the server.

Accidental closure of the terminal window or tab.

All three conditions deliver SIGHUP to the MySQL client, which terminates the import.
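
The failure mode can be reproduced without MySQL at all: SIGHUP's default disposition terminates a process, and a sleep stand‑in for the mysql client demonstrates it. A minimal sketch:

```shell
# Start a long-running child process (stand-in for the mysql client)
sleep 30 &
pid=$!

# Simulate the session ending by delivering SIGHUP to the child
kill -HUP "$pid"

# wait reports 128 + signal number; SIGHUP is signal 1, so 129
wait "$pid"
echo "exit status: $?"   # prints: exit status: 129
```

Every solution below works by keeping SIGHUP from reaching the client, or by running it outside the SSH session entirely.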

Solution Strategy

Detach the import process from the controlling terminal so that SIGHUP is ignored.

Solution 1 – nohup

Run the import with nohup and background it:

# Change to the directory containing the SQL file
cd /path/to/your/sqlfile/

# Run the import, redirecting output and errors to a log file
nohup mysql -u your_username -p'your_password' your_database < huge_file.sql > import.log 2>&1 &
# Safer alternative (password in a MySQL option file)
# nohup mysql --defaults-extra-file=~/.my.cnf your_database < huge_file.sql > import.log 2>&1 &

Key symbols:

nohup – makes the process immune to SIGHUP.

& – runs the command in the background.

> import.log – redirects standard output to a log file.

2>&1 – redirects standard error to the same log file.

Monitor progress:

# Real‑time view of the log
tail -f import.log
# List background jobs
jobs
# Detailed process view
ps aux | grep mysql

Solution 2 – Terminal Multiplexers (tmux / screen)

Multiplexers create virtual terminals that persist after the SSH connection ends.

Example with tmux (recommended):

# Install tmux if needed
# CentOS/RHEL/Rocky
sudo yum install tmux
# Ubuntu/Debian
sudo apt install tmux

# Create a new session named "mysql-import"
tmux new-session -s mysql-import
# Inside the session, run the import
mysql -u your_username -p'your_password' your_database < /path/to/huge_file.sql
# Detach (Ctrl‑B then D)

# Later, list sessions and re‑attach
tmux list-sessions
tmux attach-session -t mysql-import

The import continues even if the SSH link drops; re‑attach to view real‑time output.
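
screen, mentioned above, follows the same pattern; the commands below are the standard GNU screen equivalents of the tmux workflow:

```shell
# Create a named session
screen -S mysql-import
# Inside the session, run the import, then detach with Ctrl-A followed by D

# Later: list sessions and re-attach
screen -ls
screen -r mysql-import
```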

Solution 3 – Systemd Service

For production or scheduled imports, define a systemd unit, e.g. /etc/systemd/system/mysql-import.service:

[Unit]
Description=MySQL Large Data Import
After=mysqld.service
Requires=mysqld.service

[Service]
Type=oneshot
User=mysql
Group=mysql
WorkingDirectory=/path/to/your/sqlfile
# ExecStart does not perform shell redirection, so wrap the command in a shell
ExecStart=/bin/sh -c '/usr/bin/mysql --defaults-extra-file=/etc/mysql/import.cnf your_database < huge_file.sql'
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

Install the unit, reload systemd, and start the service:

sudo chmod 644 /etc/systemd/system/mysql-import.service
sudo systemctl daemon-reload
sudo systemctl start mysql-import.service

View live logs:

sudo journalctl -u mysql-import.service -f
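
For the scheduled case, a companion timer unit can trigger the service. A minimal sketch, saved as /etc/systemd/system/mysql-import.timer (the daily schedule is an assumption – adjust OnCalendar to your needs):

```ini
[Unit]
Description=Scheduled trigger for the MySQL import service

[Timer]
# Fires mysql-import.service once a day; Persistent=true runs a
# missed activation after downtime
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Activate it with sudo systemctl enable --now mysql-import.timer.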

Best‑Practice Techniques

Backup before import

mysqldump -u root -p --single-transaction --routines --triggers your_database > backup_before_import.sql
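
Before relying on the backup, a quick sanity check is cheap insurance: mysqldump appends a "-- Dump completed" marker as its last line when it finishes cleanly.

```shell
# Confirm the dump ran to completion and eyeball its size
tail -n 1 backup_before_import.sql | grep -q 'Dump completed' \
  && echo "backup looks complete"
ls -lh backup_before_import.sql
```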

Split extremely large SQL files

# Split by line count (e.g., 10 000 lines per chunk)
split -l 10000 huge_file.sql chunk_
# Import each chunk in a loop
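
The import loop can be sketched as follows (assumes credentials in a MySQL option file at $HOME/.my.cnf; note that a purely line-based split can cut a multi-line statement in half, so confirm chunk boundaries fall on statement boundaries before importing):

```shell
# Import every chunk in order; stop at the first failure so the
# remaining chunk names record exactly where to resume
for f in chunk_*; do
    echo "importing $f"
    mysql --defaults-extra-file="$HOME/.my.cnf" your_database < "$f" || {
        echo "import failed on $f" >&2
        break
    }
done
```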

Speed‑up parameters

mysql -u username -p db_name \
  --init-command="SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;" \
  < file.sql

Because autocommit is off, make sure the file ends with an explicit COMMIT, or the load is rolled back when the session closes. The SET variables are session‑scoped and revert once the client disconnects, but re‑enable the checks explicitly if you keep working in the same session.

Progress monitoring with pv

# Install pv
sudo yum install pv   # or sudo apt install pv
# Pipe with progress display
pv huge_file.sql | mysql -u username -p db_name
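
A common variation, assuming the dump is gzip‑compressed: keep pv on the compressed file so the progress bar tracks bytes actually read from disk, and decompress in the pipeline.

```shell
# Progress bar reflects the compressed file while gunzip feeds mysql
pv huge_file.sql.gz | gunzip | mysql -u username -p db_name
```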

Comparison of Approaches

One‑off import – nohup
Pros: simple, no extra tools.
Cons: real‑time monitoring requires checking the log file.

Need real‑time observation – tmux / screen
Pros: detach/re‑attach, live output.
Cons: requires learning basic commands.

Production or scheduled jobs – systemd
Pros: integrates with system logs, supports restart policies.
Cons: more complex configuration.

Very large files – split the file, then apply any of the above.
Pros: reduces risk, easier recovery.
Cons: requires pre‑processing of the SQL file.

Tags: Linux, MySQL, data import, systemd, tmux, nohup, session timeout