Backend Development

Optimizing a High‑Concurrency Interface: Reducing Response Time from 30 seconds to 0.8 seconds

This article presents a real‑world case study of a high‑concurrency data‑processing API that originally took over 30 seconds to respond, explains how the bottleneck was identified in SQL and Java code, and details a series of optimizations—including SQL aggregation, moving logic to the database, and introducing Caffeine caching—that ultimately brought the response time under one second.

Top Architect

Scenario

In a high‑concurrency data‑processing scenario, an interface that processes over 2 million records originally required about 30 seconds to respond, which is unacceptable for a transaction‑interception system.

Problem Diagnosis

Initial investigation ruled out network and hardware issues. The MyBatis call List<Map<String, Object>> list = transhandleFlowMapper.selectDataTransHandleFlowAdd(selectSql); took roughly 30 seconds end to end, yet the same PostgreSQL query completed in about 800 ms when run directly against the database. The gap pointed to overhead in the Java layer, which was fetching and materializing all 2 million rows.

SQL‑Level Analysis

Using EXPLAIN ANALYZE showed that the query was scanning a large table without efficient aggregation, and the result set was being materialized as a list of maps, which is costly.

Code‑Level Analysis

Each row was converted into a Map object, creating millions of objects and causing significant GC and memory pressure. The key programhandleIdList held an array of IDs per row.
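The cost can be illustrated with a stripped-down sketch of the original pattern (the data is synthetic; the column name programhandleidlist follows the article): one Map object per row, with each row's ID array unpacked and counted in Java.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RowMaterializationSketch {
    // Simulates the original approach: every DB row arrives as a Map,
    // and the per-row ID arrays are unpacked and counted in Java.
    // With 2 million rows this creates millions of short-lived objects.
    static Map<Long, Long> countIdsInJava(List<Map<String, Object>> rows) {
        Map<Long, Long> counts = new HashMap<>();
        for (Map<String, Object> row : rows) {
            Long[] ids = (Long[]) row.get("programhandleidlist");
            for (Long id : ids) {
                counts.merge(id, 1L, Long::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Synthetic stand-in for the rows returned by MyBatis.
        List<Map<String, Object>> rows = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            Map<String, Object> row = new HashMap<>(); // one Map per row
            row.put("programhandleidlist", new Long[] {1L, 2L});
            rows.add(row);
        }
        Map<Long, Long> counts = countIdsInJava(rows);
        System.out.println(counts.get(1L) + " " + counts.get(2L)); // 3 3
    }
}
```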

Optimization Measures

1. SQL Optimization

The goal was to collapse the 2 million rows into a single aggregated result using PostgreSQL’s array_agg and unnest functions. The new query looks like:

SELECT array_agg(elem) AS concatenated_array
FROM (
    SELECT unnest(programhandleidlist) AS elem
    FROM anti_transhandle
    WHERE create_time BETWEEN '2024-01-08 00:00:00.0' AND '2024-01-09 00:00:00.0'
) sub;

This reduces the amount of data transferred to the application and brings the SQL execution time down to about 0.8 seconds.
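In effect, unnest expands each row's ID array into individual elements and array_agg reassembles them into a single array, so the application receives one row instead of 2 million. The same flattening, sketched in plain Java over synthetic data:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class UnnestAggSketch {
    // Mirrors SELECT array_agg(elem) FROM (SELECT unnest(programhandleidlist) ...):
    // expand every per-row array, then collect all elements into one list.
    static List<Long> flatten(List<Long[]> perRowArrays) {
        return perRowArrays.stream()
                .flatMap(Arrays::stream)        // unnest
                .collect(Collectors.toList());  // array_agg
    }

    public static void main(String[] args) {
        List<Long[]> rows = List.of(new Long[] {1L, 2L}, new Long[] {2L, 3L});
        System.out.println(flatten(rows)); // [1, 2, 2, 3]
    }
}
```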

2. Moving Business Logic to the Database

To further cut data volume, the counting of each ID’s occurrence was performed directly in SQL:

SELECT elem, COUNT(*) AS count
FROM (
    SELECT unnest(programhandleidlist) AS elem
    FROM anti_transhandle
    WHERE create_time BETWEEN '2024-01-08 00:00:00.0' AND '2024-01-09 00:00:00.0'
) sub
GROUP BY elem;

This query returns two columns (element and its frequency) in about 0.7 seconds.
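On the Java side, the small grouped result can then be folded into a lookup map. A minimal sketch, assuming each row arrives as a Map keyed by the query's output columns elem and count (the data is synthetic):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupedResultSketch {
    // The optimized query returns one (elem, count) row per distinct ID
    // instead of 2 million raw rows, so this fold is cheap.
    static Map<Long, Long> toCountMap(List<Map<String, Object>> rows) {
        Map<Long, Long> counts = new HashMap<>();
        for (Map<String, Object> row : rows) {
            counts.put((Long) row.get("elem"), (Long) row.get("count"));
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Object> r1 = new HashMap<>();
        r1.put("elem", 1L);
        r1.put("count", 2L);
        Map<String, Object> r2 = new HashMap<>();
        r2.put("elem", 2L);
        r2.put("count", 1L);
        System.out.println(toCountMap(List.of(r1, r2)).get(1L)); // 2
    }
}
```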

3. Introducing a Cache Layer

A local cache using Caffeine was added to avoid repeated database hits for historical data. The Maven dependency is:

<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>3.1.8</version>
</dependency>

The cache configuration (LRU, max size 500, expire after 60 minutes) is defined in a Spring @Configuration class, and the service method retrieves the previous day’s hit rate from the cache before falling back to the database.
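A sketch of such a configuration (the bean and class names here are illustrative, not from the original code):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.concurrent.TimeUnit;

@Configuration
public class CaffeineCacheConfig {

    // Size-bounded cache (Caffeine's eviction approximates LRU) holding at
    // most 500 entries, each expiring 60 minutes after it is written.
    @Bean
    public Cache<String, Object> historyCache() {
        return Caffeine.newBuilder()
                .maximumSize(500)
                .expireAfterWrite(60, TimeUnit.MINUTES)
                .build();
    }
}
```

At the call site, Caffeine's cache.get(key, loader) computes and stores the value on a miss, so historical counts are read from the database at most once per expiry window per key.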

Summary

After applying SQL aggregation, moving part of the logic into the database, and adding Caffeine caching, the interface’s total response time dropped from roughly 30 seconds to about 0.8 seconds. The case also highlights the limitations of row‑oriented relational databases for massive analytical workloads; column‑oriented stores (e.g., ClickHouse, Hive) are better suited when truly millisecond‑level analytical queries are required.

Tags: Backend Development, Performance Tuning, MyBatis, PostgreSQL, SQL Optimization, Caffeine Cache
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as architecture evolution with internet technologies. Architects who enjoy thinking and sharing are welcome to exchange ideas and learn together.
