
Do Varchar Lengths Really Impact MySQL Storage and Query Performance?

This article experimentally investigates whether the declared length of a VARCHAR column (e.g., 50 vs. 500) affects MySQL storage size and query performance, covering table creation, bulk data insertion, storage-size queries, indexed and full-table scans, and the underlying reason for the differences observed.

macrozheng

1. Problem Description

When designing table structures, a common rule of thumb says that variable-length fields should be declared with the shortest length that satisfies the requirement. This article investigates why that rule exists, focusing on storage space and query performance, and tests whether varchar(50) and varchar(500) really behave differently.

2. Verify Storage Space Difference

1. Prepare two tables

<code>CREATE TABLE `category_info_varchar_50` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `name` varchar(50) NOT NULL COMMENT 'Category name',
  `is_show` tinyint(4) NOT NULL DEFAULT '0' COMMENT 'Visibility: 0 disabled, 1 enabled',
  `sort` int(11) NOT NULL DEFAULT '0' COMMENT 'Sort order',
  `deleted` tinyint(1) DEFAULT '0' COMMENT 'Deleted flag',
  `create_time` datetime NOT NULL COMMENT 'Creation time',
  `update_time` datetime NOT NULL COMMENT 'Update time',
  PRIMARY KEY (`id`) USING BTREE,
  KEY `idx_name` (`name`) USING BTREE COMMENT 'Name index'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Category';</code>
<code>CREATE TABLE `category_info_varchar_500` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
  `name` varchar(500) NOT NULL COMMENT 'Category name',
  `is_show` tinyint(4) NOT NULL DEFAULT '0' COMMENT 'Visibility: 0 disabled, 1 enabled',
  `sort` int(11) NOT NULL DEFAULT '0' COMMENT 'Sort order',
  `deleted` tinyint(1) DEFAULT '0' COMMENT 'Deleted flag',
  `create_time` datetime NOT NULL COMMENT 'Creation time',
  `update_time` datetime NOT NULL COMMENT 'Update time',
  PRIMARY KEY (`id`) USING BTREE,
  KEY `idx_name` (`name`) USING BTREE COMMENT 'Name index'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Category';</code>
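As a sanity check, the two definitions can be compared via information_schema.COLUMNS; only the declared maximum length of name should differ. (The schema name test_mysql_field matches the storage queries used later in this article.)

<code>SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'test_mysql_field'
  AND TABLE_NAME IN ('category_info_varchar_50', 'category_info_varchar_500')
  AND COLUMN_NAME = 'name';</code>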

2. Insert data

<code>DELIMITER $$
CREATE PROCEDURE batchInsertData(IN total INT)
BEGIN
    DECLARE start_idx INT DEFAULT 1;
    DECLARE end_idx INT;
    DECLARE batch_size INT DEFAULT 500;
    DECLARE insert_values TEXT;
    SET end_idx = LEAST(total, start_idx + batch_size - 1);
    WHILE start_idx <= total DO
        -- Build one multi-row VALUES list per batch
        SET insert_values = '';
        WHILE start_idx <= end_idx DO
            SET insert_values = CONCAT(insert_values, '(''name', start_idx, ''',0,0,0,NOW(),NOW()),');
            SET start_idx = start_idx + 1;
        END WHILE;
        -- Strip the trailing comma
        SET insert_values = LEFT(insert_values, LENGTH(insert_values) - 1);
        -- Insert the same rows into both tables so their contents are identical
        SET @sql = CONCAT('INSERT INTO category_info_varchar_50 (name, is_show, sort, deleted, create_time, update_time) VALUES ', insert_values, ';');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
        SET @sql = CONCAT('INSERT INTO category_info_varchar_500 (name, is_show, sort, deleted, create_time, update_time) VALUES ', insert_values, ';');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
        SET end_idx = LEAST(total, start_idx + batch_size - 1);
    END WHILE;
END$$
DELIMITER ;
-- Insert one million rows into each table
CALL batchInsertData(1000000);
</code>
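Since both INSERT statements reuse the same VALUES list, the two tables should end up with identical contents; a quick row-count check confirms this:

<code>SELECT
  (SELECT COUNT(*) FROM category_info_varchar_50)  AS rows_50,
  (SELECT COUNT(*) FROM category_info_varchar_500) AS rows_500;</code>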

3. Query storage size

<code>SELECT table_schema AS "database",
       table_name   AS "table",
       table_rows   AS "rows",
       TRUNCATE(data_length/1024/1024,2) AS "data size (MB)",
       TRUNCATE(index_length/1024/1024,2) AS "index size (MB)"
FROM information_schema.TABLES
WHERE table_schema = 'test_mysql_field'
  AND TABLE_NAME = 'category_info_varchar_50'
ORDER BY data_length DESC, index_length DESC;</code>
<code>SELECT table_schema AS "database",
       table_name   AS "table",
       table_rows   AS "rows",
       TRUNCATE(data_length/1024/1024,2) AS "data size (MB)",
       TRUNCATE(index_length/1024/1024,2) AS "index size (MB)"
FROM information_schema.TABLES
WHERE table_schema = 'test_mysql_field'
  AND TABLE_NAME = 'category_info_varchar_500'
ORDER BY data_length DESC, index_length DESC;</code>
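Note that table_rows, data_length, and index_length in information_schema.TABLES are estimates that can lag behind bulk inserts; running ANALYZE TABLE first refreshes the statistics:

<code>ANALYZE TABLE category_info_varchar_50, category_info_varchar_500;</code>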

4. Conclusion

The two tables occupy the same amount of storage. This is expected: VARCHAR is a variable-length type that stores only the bytes actually used (plus one or two length-prefix bytes), so the declared maximum does not affect disk usage.

3. Verify Performance Difference

1. Index‑covering query

<code>select name from category_info_varchar_50 where name='name100000'; -- 0.012s
select name from category_info_varchar_500 where name='name100000'; -- 0.012s
select name from category_info_varchar_50 order by name; -- 0.370s
select name from category_info_varchar_500 order by name; -- 0.379s</code>

Index‑covering queries show negligible performance difference.
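EXPLAIN shows why: since idx_name covers these queries, the Extra column should report "Using index" for both tables, meaning the sort and lookup are served entirely from the index (output details vary by MySQL version):

<code>EXPLAIN SELECT name FROM category_info_varchar_50  WHERE name = 'name100000';
EXPLAIN SELECT name FROM category_info_varchar_500 WHERE name = 'name100000';</code>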

2. Range query and ORDER BY

<code>select * from category_info_varchar_50 where name='name100000'; -- 0.012s
select * from category_info_varchar_500 where name='name100000'; -- 0.012s
select * from category_info_varchar_50 where name in ('name100','name1000',...,'name9900000'); -- 0.011s‑0.014s
select * from category_info_varchar_500 where name in ('name100','name1000',...,'name9900000'); -- 0.011s‑0.014s</code>

These equality and IN lookups through the index perform similarly on both tables; per the author's tests, adding ORDER BY to them introduces only slight overhead.

3. Full‑table scan with and without sorting

<code>select * from category_info_varchar_50 order by name; -- 1.498s
select * from category_info_varchar_500 order by name; -- 4.875s</code>

Without sorting, both tables have similar performance; with sorting, the longer VARCHAR incurs a much higher cost.
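The sort path can be inspected directly. EXPLAIN should show "Using filesort" for both queries, while the Sort_merge_passes session status counter reveals whether the sort spilled to temporary files on disk (FLUSH STATUS resets the session counters first):

<code>EXPLAIN SELECT * FROM category_info_varchar_500 ORDER BY name;
FLUSH STATUS;
SELECT * FROM category_info_varchar_500 ORDER BY name;
SHOW SESSION STATUS LIKE 'Sort_merge_passes';</code>

A value greater than zero means the sort did not fit in memory and required merge passes over temporary files.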

Analysis

MySQL estimates the memory a sort will need from the declared column length, not the actual data length. When the declaration is excessively large, the estimate exceeds sort_buffer_size, causing MySQL to spill the sort to temporary files on disk and dramatically slowing the query. Increasing sort_buffer_size or shortening the column definition mitigates this.
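As a sketch of the mitigation, sort_buffer_size can be raised for the current session before re-running the sorted query (the 8 MB value below is illustrative, not a recommendation):

<code>SHOW VARIABLES LIKE 'sort_buffer_size';
SET SESSION sort_buffer_size = 8 * 1024 * 1024;  -- 8 MB, illustrative value
SELECT * FROM category_info_varchar_500 ORDER BY name;</code>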

4. Final Conclusion

When a variable-length column is used for sorting, an oversized definition leads to inflated memory estimates, temporary-file sorting, and poor query performance. Hence the rule: declare VARCHAR columns only as long as the data actually requires.

performance, MySQL, Database Design, VARCHAR, index, storage, sort_buffer_size
Written by

macrozheng

Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.
