
Multi‑Stage Optimization of Category Tree Queries in a SpringBoot Application

This article details a step‑by‑step performance optimization of a category‑tree query in a SpringBoot‑Thymeleaf e‑commerce system, covering initial Redis caching, a periodic refresh job, a local Caffeine cache, Nginx GZip compression, data slimming, and byte‑level compression to dramatically improve response times and reduce Redis key size.


In an e‑commerce system built with SpringBoot and the Thymeleaf template engine, a simple category‑tree query was initially implemented by directly fetching category data from the database and assembling it into a tree structure for each request.

As the number of categories grew, performance bottlenecks appeared, prompting a series of optimizations.
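The original per‑request implementation boils down to fetching all category rows and assembling them into a tree. A minimal sketch of that assembly step (the class shape and method names here are illustrative, not the article's actual code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CategoryTreeBuilder {

    public static class Category {
        public Long id;
        public String name;
        public Long parentId;           // null for root categories
        public List<Category> children = new ArrayList<>();

        public Category(Long id, String name, Long parentId) {
            this.id = id;
            this.name = name;
            this.parentId = parentId;
        }
    }

    /** Assembles a flat category list into a tree in O(n) using an id index. */
    public static List<Category> buildTree(List<Category> flat) {
        Map<Long, Category> byId = new HashMap<>();
        for (Category c : flat) {
            byId.put(c.id, c);
        }
        List<Category> roots = new ArrayList<>();
        for (Category c : flat) {
            Category parent = (c.parentId == null) ? null : byId.get(c.parentId);
            if (parent == null) {
                roots.add(c);           // no parent found → top-level node
            } else {
                parent.children.add(c);
            }
        }
        return roots;
    }
}
```

Doing this on every request is cheap for a handful of categories but becomes a per‑request database round trip plus an O(n) rebuild once the catalog grows.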

First Optimization – Redis Cache

The API was modified to check Redis for a cached JSON representation of the category tree. If present, the cached data is returned; otherwise, the database is queried, the result is cached in Redis with a 5‑minute TTL, and then returned.
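A sketch of that cache‑aside read path. The key name is illustrative; in the real application the cache operations would be StringRedisTemplate calls (opsForValue().get(key) / set(key, json, Duration.ofMinutes(5))) — a plain map stands in here so the control flow is runnable on its own:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class CategoryTreeCache {

    // Stand-in for Redis; real code would go through StringRedisTemplate.
    static final Map<String, String> redis = new ConcurrentHashMap<>();
    static final String KEY = "category:tree";   // illustrative key name

    /** Cache-aside read: return cached JSON if present, else load, cache, return. */
    public static String getCategoryTreeJson(Supplier<String> dbLoader) {
        String cached = redis.get(KEY);
        if (cached != null) {
            return cached;                       // cache hit
        }
        String json = dbLoader.get();            // cache miss: query DB, build tree, serialize
        redis.put(KEY, json);                    // real code also sets a 5-minute TTL
        return json;
    }
}
```

The 5‑minute TTL bounds staleness: a category change becomes visible at most five minutes later, without any explicit invalidation.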


Second Optimization – Periodic Job

A scheduled Job runs every 5 minutes to asynchronously rebuild the category tree from the database and refresh the Redis cache, while retaining the original on‑demand cache‑populate logic as a fallback.
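The refresh job can be as small as a method that rebuilds the JSON and overwrites the key; in Spring it would be annotated with @Scheduled(fixedRate = 300_000) on a @Component. A runnable sketch, again with a map standing in for Redis:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class CategoryTreeRefreshJob {

    static final Map<String, String> redis = new ConcurrentHashMap<>(); // stand-in for Redis
    static final String KEY = "category:tree";                          // illustrative key

    private final Supplier<String> treeLoader;   // queries the DB and serializes the tree

    public CategoryTreeRefreshJob(Supplier<String> treeLoader) {
        this.treeLoader = treeLoader;
    }

    // In the real app: @Scheduled(fixedRate = 300_000) — every 5 minutes.
    public void refresh() {
        String json = treeLoader.get();
        redis.put(KEY, json);   // overwrite unconditionally; readers rarely see a miss
    }
}
```

Because the job keeps the key warm, the on‑demand populate path from the first optimization only fires if the job fails or the key is evicted.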

Third Optimization – Local Caffeine Cache

To reduce the load on Redis, a local in‑memory cache based on the Caffeine library (integrated through Spring's cache support) was introduced. The request flow now checks the local cache first, falls back to Redis, and finally to the database if needed, updating each cache layer accordingly.

1. Check the local cache; if hit, return.

2. On a miss, query Redis; if hit, populate the local cache and return.

3. If Redis also misses, query the database, refresh both Redis and the local cache, then return.

Local cache entries are set with a 5‑minute expiration to keep data reasonably fresh.

Fourth Optimization – Nginx GZip Compression

Because the category tree payload grew to about 1 MB, enabling GZip compression in Nginx reduced the response size to roughly 100 KB, improving network transfer time.
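A typical nginx.conf fragment for this — the exact compression level and size threshold are illustrative, not from the article:

```nginx
gzip              on;
gzip_comp_level   5;            # balance CPU cost against compression ratio
gzip_min_length   1024;         # skip responses too small to benefit
gzip_types        application/json text/html text/css application/javascript;
gzip_vary         on;           # emit "Vary: Accept-Encoding" for intermediary caches
```

Note this only shrinks the response on the wire; the full ~1 MB JSON still sits in Redis, which the next optimization addresses.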

Fifth Optimization – Data Slimming and Byte‑Level Compression

To address Redis large‑key issues, the JSON payload was trimmed by removing unnecessary fields (e.g., inDate, inUserId, inUserName) and renaming the remaining fields to short keys (i, l, n, p, c).

@Data
@NoArgsConstructor
@AllArgsConstructor
public class Category {
    private Long id;
    private String name;
    private Long parentId;
    private Date inDate;
    private Long inUserId;
    private String inUserName;
    private List<Category> children;
}

After slimming, the class was refactored to:

@Data
@NoArgsConstructor
@AllArgsConstructor
public class Category {
    /** Category ID */
    @JsonProperty("i")
    private Long id;

    /** Category level */
    @JsonProperty("l")
    private Integer level;

    /** Category name */
    @JsonProperty("n")
    private String name;

    /** Parent category ID */
    @JsonProperty("p")
    private Long parentId;

    /** Child category list */
    @JsonProperty("c")
    private List<Category> children;
}

The JSON string is then compressed to a byte array with GZip and stored in Redis via RedisTemplate. On retrieval, the byte array is decompressed back to JSON before the tree is rebuilt, achieving roughly a ten‑fold reduction in stored size.
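The compress/decompress round trip needs only java.util.zip; the resulting bytes would then be written and read through a RedisTemplate configured with a byte‑array serializer (the helper class and method names below are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipJson {

    /** Compresses a JSON string to a GZip byte array for storage in Redis. */
    public static byte[] compress(String json) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return out.toByteArray();
    }

    /** Decompresses the stored bytes back into the original JSON string. */
    public static String decompress(byte[] bytes) throws IOException {
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(bytes))) {
            return new String(gzip.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

Category JSON compresses well because field names and structure repeat for every node, which is exactly the redundancy GZip exploits.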

After all optimizations, the homepage QPS increased from ~100 to over 500, and the Redis large‑key problem was resolved.

Written by IT Services Circle