How to Turn a 3‑Second Category Tree into 30 ms: A Real‑World Backend Optimization
This article analyzes a severe N+1 query performance disaster in a Java Spring Boot project, explains why the traditional recursive approach is slow, and presents a production‑tested solution that reduces database calls to one, uses O(n) tree construction, and adds multi‑level caching to achieve a 100‑fold speedup.
A Real Performance Disaster
The homepage category tree loading in a fast‑growing project suffered from three major issues: users waited 3‑5 seconds for the tree, the database connection pool was exhausted during peak traffic, and developers kept applying temporary fixes that increased technical debt.
Root cause: a traditional recursive query caused an N+1 problem, resulting in 15,000 database queries for 15,000 nodes.
After optimization the response time dropped from 3 seconds to 30 ms, a 100× improvement.
Why Is the Traditional Approach So Slow?
N+1 Query Disaster
// This code looks simple but is a performance killer
public List<Category> getCategoryTree() {
List<Category> roots = categoryMapper.getRootCategories(); // 1 query
for (Category root : roots) {
loadChildren(root); // each node triggers recursive queries
}
return roots;
}
private void loadChildren(Category parent) {
List<Category> children = categoryMapper.getByParentId(parent.getId()); // N queries
parent.setChildren(children);
for (Category child : children) {
loadChildren(child); // recursion continues
}
}
Problem analysis:
10,000 nodes = 10,000 database queries
Each query costs ~2 ms of round‑trip latency, so 10,000 queries approach 20 s in the worst case
Connection pool quickly exhausted
Massive temporary objects cause heavy GC pressure
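The query explosion is easy to demonstrate without a database. In the plain‑Java simulation below, `FakeMapper` is a hypothetical stand‑in for `categoryMapper` that merely counts how many round trips the recursive traversal would issue:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NPlusOneDemo {
    // Stand-in for categoryMapper: counts how many "queries" the recursion issues
    static class FakeMapper {
        final Map<Long, List<Long>> childrenByParent = new HashMap<>();
        long queryCount = 0;

        List<Long> getByParentId(Long parentId) {
            queryCount++; // every call here would be one round trip to the database
            return childrenByParent.getOrDefault(parentId, List.of());
        }
    }

    static void loadChildren(FakeMapper mapper, Long id) {
        for (Long child : mapper.getByParentId(id)) {
            loadChildren(mapper, child); // recursion: one query per node
        }
    }

    public static void main(String[] args) {
        FakeMapper mapper = new FakeMapper();
        // Build a flat tree: root (id 0) with 9,999 children
        List<Long> children = new ArrayList<>();
        for (long i = 1; i < 10_000; i++) children.add(i);
        mapper.childrenByParent.put(0L, children);

        loadChildren(mapper, 0L);
        // 1 query for the root's children + 9,999 queries for the (empty) children
        System.out.println(mapper.queryCount); // 10000
    }
}
```

For 10,000 nodes the recursion issues 10,000 calls, one per node — exactly the N+1 pattern described above.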
Performance Test Data Comparison
When the number of nodes increases, the traditional recursive solution quickly reaches timeout, while the optimized solution stays under 50 ms even for 50,000 nodes.
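Such a comparison can be reproduced with a small harness. The sketch below times only the in‑memory assembly — the single batch query is replaced by synthetic rows — so the absolute numbers are illustrative, not the article's original measurements:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TreeBuildBench {
    record Node(long id, Long parentId, List<Node> children) {}

    // O(n) assembly: index by id, then link each node to its parent in one pass
    static List<Node> build(List<Node> nodes) {
        Map<Long, Node> byId = new HashMap<>();
        for (Node n : nodes) byId.put(n.id(), n);
        List<Node> roots = new ArrayList<>();
        for (Node n : nodes) {
            Node parent = n.parentId() == null ? null : byId.get(n.parentId());
            if (parent == null) roots.add(n); else parent.children().add(n);
        }
        return roots;
    }

    public static void main(String[] args) {
        int n = 50_000;
        List<Node> nodes = new ArrayList<>(n);
        nodes.add(new Node(0, null, new ArrayList<>()));
        for (long i = 1; i < n; i++) {
            nodes.add(new Node(i, i / 10, new ArrayList<>())); // 10-ary synthetic tree
        }
        long start = System.nanoTime();
        List<Node> roots = build(nodes);
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println("roots=" + roots.size() + ", build took " + micros + " µs");
    }
}
```

On typical hardware the linear build finishes in single‑digit milliseconds even at 50,000 nodes, which is why the database round trip, not the algorithm, dominates the optimized path.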
Core Solution: One Query + O(n) Algorithm
Solution Idea
Core concept: turn N+1 queries into a single query plus an in‑memory O(n) tree construction.
Steps:
1️⃣ Batch query all valid categories: SELECT * FROM category WHERE status = 1
2️⃣ Build a HashMap index (O(1) lookup for parent‑child relationships)
3️⃣ Single pass to assemble the tree (O(n) time)
4️⃣ Cache the built result to avoid recomputation
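The four steps can be condensed into one small class. Here `loader` and `assembler` are illustrative placeholders for the real mapper call and tree builder, and the cache is a bare `AtomicReference` rather than the Spring Cache setup used in the full implementation:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;
import java.util.function.Supplier;

// Steps 1–4 condensed. `loader` stands in for the single batch query and
// `assembler` for the O(n) tree build; both names are illustrative only.
public class CachedTree<T> {
    private final Supplier<List<T>> loader;             // step 1: one batch query
    private final Function<List<T>, List<T>> assembler; // steps 2–3: index + assemble
    private final AtomicReference<List<T>> cache = new AtomicReference<>(); // step 4

    public CachedTree(Supplier<List<T>> loader, Function<List<T>, List<T>> assembler) {
        this.loader = loader;
        this.assembler = assembler;
    }

    public List<T> get() {
        List<T> tree = cache.get();
        if (tree == null) {                    // recompute only on a cache miss
            tree = assembler.apply(loader.get());
            cache.compareAndSet(null, tree);
        }
        return tree;
    }

    public void invalidate() {                 // call after create/update/delete
        cache.set(null);
    }

    public static void main(String[] args) {
        int[] queries = {0};
        CachedTree<String> tree = new CachedTree<>(
            () -> { queries[0]++; return List.of("electronics", "books"); },
            rows -> rows);
        tree.get();
        tree.get();                            // second call is a cache hit
        System.out.println("batch queries issued: " + queries[0]); // 1
    }
}
```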
Database Design
CREATE TABLE category (
id BIGINT PRIMARY KEY,
name VARCHAR(100) NOT NULL,
parent_id BIGINT,
level INT NOT NULL DEFAULT 0,
sort_order INT DEFAULT 0,
status TINYINT DEFAULT 1,
create_time DATETIME,
update_time DATETIME,
INDEX idx_parent_sort (parent_id, sort_order),
INDEX idx_level (level),
INDEX idx_status (status)
);
Design points:
Level field enables batch queries and optimized sorting
Composite index (parent_id, sort_order) supports ordered retrieval
Status field allows soft delete and status filtering
Algorithm Complexity Analysis: O(n²) → O(n) Breakthrough
Traditional Recursive Complexity
// Traditional approach: time O(n²), space O(n)
public List<Category> getChildren(Long parentId) {
List<Category> children = categoryMapper.getByParentId(parentId); // 1 query
for (Category child : children) {
child.setChildren(getChildren(child.getId())); // recursive query
}
return children;
}
Complexity analysis:
Time: O(n²) worst case – each of the n nodes triggers its own query, and an unindexed parent_id lookup can scan up to n rows
Space: O(h) recursion stack, where h is the tree height (O(n) for a degenerate chain)
Database queries: O(n)
Optimized Algorithm Complexity
// Optimized approach: time O(n), space O(n)
public <T extends TreeNode<T>> List<T> buildTree(List<T> nodes, Object rootValue) {
// 1. Build HashMap index – O(n)
Map<Object, T> nodeMap = nodes.stream()
.collect(Collectors.toMap(TreeNode::getId, node -> node));
List<T> rootNodes = new ArrayList<>();
// 2. Single pass to establish parent‑child links – O(n)
for (T node : nodes) {
Object parentId = node.getParentId();
if (Objects.equals(parentId, rootValue)) {
rootNodes.add(node);
} else {
T parent = nodeMap.get(parentId);
if (parent != null) {
parent.addChild(node);
}
}
}
// Optional: sort recursively – O(n log n)
sortTreeRecursively(rootNodes);
return rootNodes;
}
Complexity analysis:
Time: O(n) – HashMap build + single traversal
Space: O(n) – HashMap stores n nodes
Database queries: O(1) – only one batch query
High‑Performance O(n) Tree Construction Algorithm
Core Implementation
@Component
public class TreeBuilder {
/**
* High‑performance tree building algorithm – O(n) time complexity
* @param nodes all nodes list
* @param rootValue parent_id value of the root (usually null or 0)
* @return built tree structure
*/
public <T extends TreeNode<T>> List<T> buildTree(List<T> nodes, Object rootValue) {
if (CollectionUtils.isEmpty(nodes)) {
return new ArrayList<>();
}
// 1. Build ID→Node fast index, O(n)
Map<Object, T> nodeMap = nodes.stream()
.collect(Collectors.toMap(TreeNode::getId, node -> node));
List<T> rootNodes = new ArrayList<>();
// 2. Single traversal to establish parent‑child relations, O(n)
for (T node : nodes) {
Object parentId = node.getParentId();
if (Objects.equals(parentId, rootValue)) {
rootNodes.add(node);
} else {
T parent = nodeMap.get(parentId);
if (parent != null) {
parent.addChild(node);
}
}
}
// 3. Optional recursive sorting – O(n log n)
sortTreeRecursively(rootNodes);
return rootNodes;
}
private <T extends TreeNode<T>> void sortTreeRecursively(List<T> nodes) {
if (CollectionUtils.isEmpty(nodes)) {
return;
}
// Sort by sort_order
nodes.sort(Comparator.comparing(TreeNode::getSortOrder,
Comparator.nullsLast(Comparator.naturalOrder())));
// Recursively sort children
for (T node : nodes) {
if (node.hasChildren()) {
sortTreeRecursively(node.getChildren());
}
}
}
}
Tree Node Base Interface
public interface TreeNode<T> {
Object getId();
Object getParentId();
Integer getSortOrder();
List<T> getChildren();
void setChildren(List<T> children);
default void addChild(T child) {
if (getChildren() == null) {
setChildren(new ArrayList<>());
}
getChildren().add(child);
}
default boolean hasChildren() {
return getChildren() != null && !getChildren().isEmpty();
}
}
Complete Business Implementation
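Before wiring the builder into Spring, the algorithm can be sanity‑checked in plain Java. The `Node` class below is a minimal stand‑in that mirrors the `TreeNode` contract without the framework plumbing:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class TreeBuilderCheck {
    static class Node {
        final Long id, parentId;
        final List<Node> children = new ArrayList<>();
        Node(Long id, Long parentId) { this.id = id; this.parentId = parentId; }
    }

    // Same O(n) algorithm as TreeBuilder#buildTree, minus the generic plumbing
    static List<Node> buildTree(List<Node> nodes, Long rootValue) {
        Map<Long, Node> byId = new HashMap<>();
        for (Node n : nodes) byId.put(n.id, n);
        List<Node> roots = new ArrayList<>();
        for (Node n : nodes) {
            if (Objects.equals(n.parentId, rootValue)) {
                roots.add(n);
            } else {
                Node parent = byId.get(n.parentId);
                if (parent != null) parent.children.add(n); // orphans are dropped
            }
        }
        return roots;
    }

    public static void main(String[] args) {
        List<Node> rows = List.of(
            new Node(1L, null), new Node(2L, 1L), new Node(3L, 1L), new Node(4L, 2L));
        List<Node> roots = buildTree(rows, null);
        System.out.println(roots.size());                             // 1
        System.out.println(roots.get(0).children.size());             // 2
        System.out.println(roots.get(0).children.get(0).children.size()); // 1
    }
}
```

One root with two children and one grandchild: the single pass links all four rows correctly, regardless of input order.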
Entity Class
@Data
@TableName("category")
public class Category implements TreeNode<Category> {
@TableId(type = IdType.AUTO)
private Long id;
private String name;
private Long parentId;
private Integer level;
private Integer sortOrder;
private Integer status;
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
private LocalDateTime createTime;
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
private LocalDateTime updateTime;
@TableField(exist = false)
private List<Category> children;
@Override
public void addChild(Category child) {
if (children == null) {
children = new ArrayList<>();
}
children.add(child);
}
}
Mapper
@Mapper
public interface CategoryMapper extends BaseMapper<Category> {
/** Get all active categories for tree building */
@Select("SELECT id, name, parent_id, level, sort_order, status, create_time, update_time FROM category WHERE status = 1 ORDER BY level, parent_id, sort_order")
List<Category> selectAllForTree();
/** Get children by parent ID */
@Select("SELECT * FROM category WHERE parent_id = #{parentId} AND status = 1 ORDER BY sort_order")
List<Category> selectByParentId(@Param("parentId") Long parentId);
}
Service Layer
@Service
@Transactional(rollbackFor = Exception.class)
public class CategoryServiceImpl implements CategoryService {
@Autowired
private CategoryMapper categoryMapper;
@Autowired
private TreeBuilder treeBuilder;
@Cacheable(value = "category:tree", key = "'all'")
public List<Category> getCategoryTree() {
// 1. One‑time query for all data
List<Category> allCategories = categoryMapper.selectAllForTree();
// 2. Build tree with high‑performance algorithm
return treeBuilder.buildTree(allCategories, null);
}
@CacheEvict(value = "category:tree", allEntries = true)
public Category createCategory(Category category) {
if (category.getParentId() != null) {
Category parent = categoryMapper.selectById(category.getParentId());
if (parent == null) {
throw new IllegalArgumentException("Parent category does not exist: " + category.getParentId());
}
category.setLevel(parent.getLevel() + 1);
} else {
category.setLevel(0);
}
category.setStatus(1);
category.setCreateTime(LocalDateTime.now());
category.setUpdateTime(LocalDateTime.now());
categoryMapper.insert(category);
return category;
}
@CacheEvict(value = "category:tree", allEntries = true)
public Category updateCategory(Category category) {
category.setUpdateTime(LocalDateTime.now());
categoryMapper.updateById(category);
return category;
}
@CacheEvict(value = "category:tree", allEntries = true)
public void deleteCategory(Long id) {
Category category = new Category();
category.setId(id);
category.setStatus(0);
category.setUpdateTime(LocalDateTime.now());
categoryMapper.updateById(category);
}
}
Controller Layer
@RestController
@RequestMapping("/api/categories")
public class CategoryController {
@Autowired
private CategoryService categoryService;
@GetMapping("/tree")
public Result<List<Category>> getCategoryTree() {
List<Category> tree = categoryService.getCategoryTree();
return Result.success(tree);
}
@PostMapping
public Result<Category> createCategory(@RequestBody @Valid Category category) {
Category created = categoryService.createCategory(category);
return Result.success(created);
}
@PutMapping("/{id}")
public Result<Category> updateCategory(@PathVariable Long id, @RequestBody @Valid Category category) {
category.setId(id);
Category updated = categoryService.updateCategory(category);
return Result.success(updated);
}
@DeleteMapping("/{id}")
public Result<Void> deleteCategory(@PathVariable Long id) {
categoryService.deleteCategory(id);
return Result.success();
}
}
Cache Optimization Strategy
Multi‑Level Cache Configuration (Caffeine + Redis)
@Configuration
@EnableCaching
public class MultiLevelCacheConfig {
/** Multi‑level cache manager: L1(Caffeine) + L2(Redis) */
@Bean
@Primary
public CacheManager multiLevelCacheManager(RedisConnectionFactory connectionFactory) {
return new MultiLevelCacheManager(
createCaffeineCacheManager(),
createRedisCacheManager(connectionFactory)
);
}
private CaffeineCacheManager createCaffeineCacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();
cacheManager.setCaffeine(Caffeine.newBuilder()
.initialCapacity(100)
.maximumSize(1000)
.expireAfterWrite(10, TimeUnit.MINUTES)
.recordStats());
return cacheManager;
}
private RedisCacheManager createRedisCacheManager(RedisConnectionFactory connectionFactory) {
RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
.entryTtl(Duration.ofHours(2))
.serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
.serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()))
.disableCachingNullValues();
return RedisCacheManager.builder(connectionFactory)
.cacheDefaults(config)
.build();
}
}
Multi‑Level Cache Implementation
public class MultiLevelCacheManager implements CacheManager {
private final CacheManager l1CacheManager; // local cache
private final CacheManager l2CacheManager; // distributed cache
public MultiLevelCacheManager(CacheManager l1CacheManager, CacheManager l2CacheManager) {
this.l1CacheManager = l1CacheManager;
this.l2CacheManager = l2CacheManager;
}
@Override
public Cache getCache(String name) {
Cache l1Cache = l1CacheManager.getCache(name);
Cache l2Cache = l2CacheManager.getCache(name);
return new MultiLevelCache(name, l1Cache, l2Cache);
}
@Override
public Collection<String> getCacheNames() {
Set<String> names = new HashSet<>();
names.addAll(l1CacheManager.getCacheNames());
names.addAll(l2CacheManager.getCacheNames());
return names;
}
}
public class MultiLevelCache implements Cache {
private final String name;
private final Cache l1Cache;
private final Cache l2Cache;
public MultiLevelCache(String name, Cache l1Cache, Cache l2Cache) {
this.name = name;
this.l1Cache = l1Cache;
this.l2Cache = l2Cache;
}
@Override public String getName() { return name; }
@Override public Object getNativeCache() { return this; }
@Override public ValueWrapper get(Object key) {
ValueWrapper v1 = l1Cache.get(key);
if (v1 != null) return v1;
ValueWrapper v2 = l2Cache.get(key);
if (v2 != null) {
l1Cache.put(key, v2.get());
return v2;
}
return null;
}
@Override public <T> T get(Object key, Class<T> type) {
ValueWrapper wrapper = get(key);
return wrapper != null ? type.cast(wrapper.get()) : null;
}
@Override public <T> T get(Object key, Callable<T> valueLoader) {
ValueWrapper wrapper = get(key);
if (wrapper != null) return (T) wrapper.get();
try {
T value = valueLoader.call();
put(key, value);
return value;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
@Override public void put(Object key, Object value) {
l1Cache.put(key, value);
l2Cache.put(key, value);
}
@Override public void evict(Object key) {
l1Cache.evict(key);
l2Cache.evict(key);
}
@Override public void clear() {
l1Cache.clear();
l2Cache.clear();
}
}
Common Questions & Solutions
Q1: How to keep cache consistent when data updates?
Solution: Use Write‑Invalidate mode with @CacheEvict on update/delete methods, optionally add a delayed double‑delete for high‑concurrency scenarios.
@CacheEvict(value = "category:tree", allEntries = true)
@Transactional
public void updateCategory(Category category) {
categoryMapper.updateById(category);
}
@Async
public void delayedCacheEvict() {
try {
Thread.sleep(500);
} catch (InterruptedException e) {
Thread.currentThread().interrupt(); // restore the interrupt flag instead of swallowing it
return;
}
cacheManager.getCache("category:tree").clear();
}
Q2: What if the tree depth causes stack overflow?
Solution: Replace recursion with an iterative breadth‑first traversal.
public <T extends TreeNode<T>> void sortTreeIteratively(List<T> roots) {
Comparator<T> bySortOrder = Comparator.comparing(TreeNode::getSortOrder,
Comparator.nullsLast(Comparator.naturalOrder()));
roots.sort(bySortOrder); // sort the root level too, not only the children
Queue<T> queue = new LinkedList<>(roots);
while (!queue.isEmpty()) {
T node = queue.poll();
if (node.hasChildren()) {
node.getChildren().sort(bySortOrder);
queue.addAll(node.getChildren());
}
}
}
Q3: How to handle memory shortage with massive data?
Solution: Use pagination or lazy loading, loading only required depth or children on demand.
public List<Category> getCategoryTreeLazy(int maxDepth) {
// selectByMaxLevel is assumed to filter: WHERE status = 1 AND level <= #{maxDepth}
List<Category> categories = categoryMapper.selectByMaxLevel(maxDepth);
return treeBuilder.buildTree(categories, null);
}
public List<Category> loadChildren(Long parentId) {
return categoryMapper.selectByParentId(parentId);
}Summary: From 3 seconds to 30 ms – Core Takeaways
Three Key Breakthroughs
1. Algorithm breakthrough: O(n²) → O(n) by using a HashMap index and a single traversal, eliminating recursive calls and nested loops, and achieving cache‑friendly memory access.
2. Database breakthrough: N+1 queries → 1 batch query, supported by proper indexing and bulk retrieval.
3. Cache breakthrough: No cache → ~95 % hit rate with a two‑level (local + distributed) cache, smart invalidation, and warm‑up mechanisms.
Action Guide
If your tree query exceeds 500 ms, follow these steps:
Replace the existing recursive logic with the provided TreeBuilder implementation.
Add the recommended indexes to the category table.
Integrate Spring Cache using the multi‑level configuration.
Run performance tests to verify the improvement.
Expected benefits: >90 % response‑time reduction, >10× concurrency capacity, >80 % reduction in database load, and a markedly better user experience.
Final Thoughts
Performance optimization is an iterative process. The solutions presented have been validated in production, but each business scenario may require further tuning. Start with a simple adjacency‑list model, then evolve the design as the system grows.
macrozheng
Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.