Understanding MySQL Adaptive Hash Index (AHI): Construction, Operation, and Maintenance
This article explains how MySQL's Adaptive Hash Index (AHI) works, why it is needed to reduce B‑tree lookup costs, the three stages of building an AHI, practical examples of its construction, performance benefits, and operational recommendations for managing AHI in production environments.
Adaptive Hash Index (AHI) is a MySQL feature that creates a hash table to cache frequently accessed index pages, aiming to reduce the cost of traversing deep B+‑tree structures as table sizes grow.
The need for AHI arises because larger tables increase the number of B+‑tree levels, making each lookup more expensive; a hash‑based cache can jump directly to the target data page.
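To make the depth argument concrete, here is a minimal sketch that estimates how many B+-tree levels are needed for a given row count. The fanout figure of roughly 1,000 entries per page is an assumption for illustration, not a measured InnoDB value.

```python
def btree_depth(rows: int, fanout: int) -> int:
    """Approximate the number of B+-tree levels needed to index
    `rows` entries when each node holds up to `fanout` children."""
    depth, capacity = 1, fanout
    while capacity < rows:
        capacity *= fanout
        depth += 1
    return depth

# With an assumed fanout of ~1000 entries per page:
print(btree_depth(10**6, 1000))  # 1 million rows -> 2 levels
print(btree_depth(10**9, 1000))  # 1 billion rows -> 3 levels
```

Each extra level is one more page visit (and potentially one more disk read) per lookup, which is the cost a direct hash probe avoids.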
AHI must be "just right"—the hash table cannot be too large (high maintenance cost) nor too small (low hit rate). The adaptive nature balances these constraints.
The construction of an AHI proceeds through three "levels":
Level 1: The index tree must be used frequently enough; otherwise building an AHI would waste resources.
Level 2: Specific search conditions ("hash info") must be used frequently. The hash info records how many leading index columns are matched in full, how many bytes match in the first partially matched column, and the direction of the match.
Examples of hash info:
For a table with index (A1, A2), the condition (A1=1 AND A2=1) yields hash info (2,0,true).
The condition (A1=1) yields hash info (1,0,true).
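The hash info triples above can be modeled as a simple record. The field names below (n_fields, n_bytes, left_side) are an illustrative reading of the triple, not InnoDB's internal identifiers.

```python
from typing import NamedTuple

class HashInfo(NamedTuple):
    """Illustrative model of the hash info triple."""
    n_fields: int    # leading index columns matched in full
    n_bytes: int     # matching bytes in the first partially matched column
    left_side: bool  # whether matching proceeds from the left

# Condition (A1=1 AND A2=1) on index (A1, A2): both columns match fully.
full_match = HashInfo(n_fields=2, n_bytes=0, left_side=True)

# Condition (A1=1): only the first column matches.
prefix_match = HashInfo(n_fields=1, n_bytes=0, left_side=True)

print(full_match)    # HashInfo(n_fields=2, n_bytes=0, left_side=True)
print(prefix_match)  # HashInfo(n_fields=1, n_bytes=0, left_side=True)
```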
Level 3: The data pages referenced by the index must be accessed frequently; otherwise caching every page would defeat the purpose of AHI.
Once the three levels are satisfied, AHI can be built. An example scenario:
Table table1 has columns A1, A2, A3, B1 and indexes Idx1(A1,A2,A3) and Idx2(B1).
Level 1 selects Idx1.
Level 2 selects hash info (2,0,true) because many queries use the left two columns.
Level 3 selects a frequently used data page P3 containing rows like (1,1,1,1) and (1,2,2,2).
During AHI construction, each row in the chosen page is hashed using the relevant leading columns (e.g., (1,1) → P3, (1,2) → P3), creating a mapping from hash value to the data page.
When a query such as A1=1 AND A2=2 is executed, MySQL recognizes that it matches Idx1 and the cached hash info, computes the hash key from the values (1,2), looks that key up in the AHI, and obtains the target page P3 directly, skipping the B+-tree descent.
This shortcut reduces an index lookup from several tree-descent steps (four in this example) to a single hash probe, improving performance.
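The build-and-lookup flow above can be sketched as a toy model: hash the leading key columns selected by the hash info and map each key straight to its data page. All names here (rows_by_page, "P3") are illustrative, not InnoDB internals, and a Python dict stands in for the real hash table.

```python
def build_ahi(rows_by_page: dict, n_fields: int) -> dict:
    """Hash the first `n_fields` columns of every row on the hot
    pages, mapping each key prefix to the page holding the row."""
    ahi = {}
    for page, rows in rows_by_page.items():
        for row in rows:
            ahi[row[:n_fields]] = page
    return ahi

# Hot page P3 holds rows (A1, A2, A3, B1); hash info says n_fields = 2.
rows_by_page = {"P3": [(1, 1, 1, 1), (1, 2, 2, 2)]}
ahi = build_ahi(rows_by_page, n_fields=2)

# Query A1=1 AND A2=2: one hash probe replaces the B+-tree descent.
print(ahi.get((1, 2)))  # -> P3

# A key that was never cached falls back to the normal B+-tree path.
print(ahi.get((9, 9)))  # -> None
```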
Summary
Large tables increase B+‑tree depth and lookup cost.
AHI provides a cache to lower page‑lookup cost.
The adaptive aspect ensures the hash table is neither too large nor too small.
AHI is built only for frequently used indexes, hash infos, and data pages.
Operational Recommendations
Configure innodb_adaptive_hash_index_parts to reduce lock contention by partitioning the cache.
Use SHOW ENGINE INNODB STATUS to monitor AHI partition usage and hit rates.
If hit rates are low, consider adjusting access patterns, isolating hot/cold data, or disabling AHI to save maintenance overhead.
Check MySQL bug lists for older versions, as early releases had AHI‑related issues that are fixed in newer releases.
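The recommendations above map onto standard InnoDB system variables and commands; the value 16 below is an illustrative setting, not a recommendation for any specific workload.

```sql
-- Partition the AHI to reduce latch contention. The variable is not
-- dynamic, so it must be set in my.cnf before startup (default: 8):
--   innodb_adaptive_hash_index_parts = 16

-- Disable AHI at runtime if monitoring shows a low hit rate:
SET GLOBAL innodb_adaptive_hash_index = OFF;

-- Inspect hit rates: compare "hash searches/s" against
-- "non-hash searches/s" in the INSERT BUFFER AND ADAPTIVE HASH INDEX
-- section of the output:
SHOW ENGINE INNODB STATUS\G
```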
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise-grade MySQL open-source tools and services, releases a premium open-source component each year on October 24 ("1024"), and continuously operates and maintains them.