15 Common MySQL Pitfalls and How to Avoid Them
This article outlines fifteen typical MySQL pitfalls—including missing WHERE clauses, lack of indexes, improper NULL handling, wrong data types, deep pagination, missing EXPLAIN analysis, charset misconfiguration, SQL injection risks, transaction misuse, collation issues, overusing SELECT *, index loss, frequent schema changes, missing backups, and unarchived historical data—and provides concrete examples and best‑practice solutions to improve performance, reliability, and security.
MySQL is widely used because it is free, open‑source, and performs well, but developers often encounter hidden problems that can severely affect performance, data integrity, and security. This article presents fifteen common MySQL pitfalls and practical ways to avoid them.
1. Query Without WHERE Clause
Fetching all rows without a WHERE condition may work on small tables but can cause OOM errors and slow performance on large datasets.
SELECT * FROM users;

Use a specific filter to limit the result set:

SELECT * FROM users WHERE code = '1001';

2. Missing Indexes
Tables without proper indexes run fast initially, but as data grows queries become slow.
SELECT * FROM orders WHERE customer_id = 123;

Add an index on the filtering column:

CREATE INDEX idx_customer ON orders(customer_id);

3. Ignoring NULL Values
COUNT(column) skips NULL values, so it counts only the rows where that column is non-NULL.
SELECT COUNT(name) FROM users;

Use COUNT(*) to count all rows:

SELECT COUNT(*) FROM users;

4. Wrong Data Types
Using overly large types such as VARCHAR(255) for small enumerations wastes space and hurts performance.
CREATE TABLE products (
id INT,
status VARCHAR(255)
);

Replace it with a compact type:

CREATE TABLE products (
id INT,
status TINYINT DEFAULT 0 COMMENT '0: inactive, 1: active'
);

5. Deep Pagination
Using LIMIT offset, count on very high offsets forces the engine to scan many rows.
SELECT * FROM users LIMIT 1000000,10;

Solutions include remembering the last ID, a sub-query with a covering index, or an inner join, all of which avoid scanning the skipped rows.
5.1 Remember Last ID
SELECT id, name FROM orders WHERE id > 1000000 LIMIT 10;

5.2 Sub-query with Covering Index
SELECT * FROM `order` WHERE id IN (
SELECT id FROM (
SELECT id FROM `order` WHERE create_time > '2024-08-11' LIMIT 100000,10
) t
);

5.3 Inner Join
SELECT * FROM `order` o1
INNER JOIN (
SELECT id FROM `order` WHERE create_time > '2024-08-11' LIMIT 100000,10
) o2 ON o1.id = o2.id;

6. Not Using EXPLAIN
Running slow queries without EXPLAIN hides the execution plan, making optimization blind.
EXPLAIN SELECT * FROM users WHERE email = '[email protected]';

7. Improper Charset Settings
Setting the database charset to utf8 (an alias for the three-byte utf8mb3) cannot store four-byte characters such as emoji and may produce garbled text. Use utf8mb4 instead:
CREATE TABLE messages (
id INT,
content TEXT
) CHARACTER SET utf8mb4;

8. SQL Injection Risks
Concatenating user input directly into SQL strings is dangerous.
String query = "SELECT * FROM users WHERE email = '" + userInput + "'";

Prefer prepared statements:
PreparedStatement stmt = connection.prepareStatement("SELECT * FROM users WHERE email = ?");
stmt.setString(1, userInput);

9. Transaction Issues
Updating multiple tables without a transaction can leave data inconsistent.
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

Wrap the statements in a transaction:
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

In Spring, use @Transactional or TransactionTemplate.
10. Collation Issues
Using a case‑insensitive collation (e.g., utf8mb4_general_ci ) while the application performs case‑sensitive string comparisons can cause unexpected results.
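As a sketch (the users table and username column here are hypothetical), a per-query COLLATE clause or a binary column collation restores case-sensitive matching:

```sql
-- Under utf8mb4_general_ci this also matches 'alice' and 'ALICE':
SELECT * FROM users WHERE username = 'Alice';

-- Force a case-sensitive comparison for a single query:
SELECT * FROM users WHERE username = 'Alice' COLLATE utf8mb4_bin;

-- Or make the column itself case-sensitive:
ALTER TABLE users
MODIFY username VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
```

Whichever you choose, keep the column's collation and the application's comparison semantics in agreement so the two layers do not silently disagree.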
11. Overusing SELECT *
Fetching all columns wastes bandwidth and CPU.
SELECT * FROM orders;

Only select the needed columns:

SELECT id, total FROM orders;

12. Index Invalidations
Certain query patterns prevent MySQL from using an index, such as applying a function to the indexed column, a leading wildcard in LIKE, or an implicit type conversion; use EXPLAIN to diagnose.
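A few illustrative patterns (the column names are hypothetical) that commonly defeat an index, with index-friendly rewrites:

```sql
-- A function on the indexed column prevents index use:
SELECT * FROM users WHERE DATE(created_at) = '2024-08-11';
-- Rewrite as a range so an index on created_at applies:
SELECT * FROM users
WHERE created_at >= '2024-08-11' AND created_at < '2024-08-12';

-- A leading wildcard defeats an index on name:
SELECT * FROM users WHERE name LIKE '%son';

-- Implicit type conversion (phone stored as VARCHAR, compared to a number)
-- forces a cast on every row and skips the index:
SELECT * FROM users WHERE phone = 13800000000;
```

Run EXPLAIN on both forms of such queries and compare the type and key columns to confirm the rewrite actually restores index access.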
13. Frequent Schema Changes
Altering tables or bulk updates during peak traffic can lock tables and degrade performance. Schedule such operations during low‑traffic windows or use online tools like Percona Toolkit or gh‑ost.
14. No Regular Backups
Data loss from accidental deletions can be mitigated by periodic mysqldump backups and automated restore procedures.
mysqldump -u root -p database_name > backup.sql

15. Forgetting to Archive Historical Data
Large amounts of old data slow queries; archive data older than a certain period to a separate database or table.
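A minimal archiving sketch, assuming a pre-created orders_archive table with the same schema as orders:

```sql
-- Copy old rows to the archive, then delete them in small batches
-- so locks stay short; repeat the DELETE until no rows remain.
INSERT INTO orders_archive
SELECT * FROM orders WHERE create_time < NOW() - INTERVAL 1 YEAR;

DELETE FROM orders
WHERE create_time < NOW() - INTERVAL 1 YEAR
LIMIT 10000;
```

Batching the DELETE keeps each transaction small; for large tables, tools such as pt-archiver automate the same copy-then-purge loop.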
By addressing these pitfalls, developers can significantly improve MySQL query performance, maintain data integrity, and enhance overall system security.