Backend Interface Performance Optimization: Common Issues and Practical Solutions

This article summarizes the typical causes of slow backend interfaces—such as MySQL slow queries, complex business logic, thread‑pool misconfiguration, lock contention and machine‑level problems—and provides concrete optimization techniques, code examples, and best‑practice recommendations for Java services.


Background

The author completed the functional development of a system in early 2021 and entered the promotion phase. As usage grew, many users praised the product while others complained about performance, especially interface latency. After a week of monitoring, more than 20 slow interfaces were identified, with five exceeding 5 seconds and one over 10 seconds, prompting a deep dive into performance tuning.

What can cause interface performance problems?

- Database slow queries
  - Deep pagination
  - Missing indexes
  - Index invalidation
  - Too many JOINs or sub‑queries
  - IN clause with excessive elements
  - Large data volume
- Complex business logic
  - Loop calls
  - Sequential calls
- Improper thread‑pool design
- Improper lock design
- Machine issues (full GC, restarts, thread exhaustion)

Problem Solving

1. Slow Queries (MySQL)

1.1 Deep Pagination

A typical pagination query looks like this:

select name, code from student limit 100, 20

When the offset grows large (e.g., limit 1000000, 20), MySQL must scan and discard all the skipped rows before returning the 20 requested, which is very slow. A better approach is to use a range condition on an indexed column:

select name, code from student where id > 1000000 limit 20

This forces MySQL to use the primary‑key index and jump directly to the required rows, but it requires the caller to pass the last retrieved id.
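In application code, this keyset approach becomes a loop that carries the last seen id forward. A minimal sketch, with the real table simulated by an in‑memory id list so it runs standalone (KeysetPagination and fetchPage are illustrative names, not from the original):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

// Keyset ("seek") pagination sketch: instead of "limit offset, size",
// each page query is "where id > :lastId order by id limit :size".
public class KeysetPagination {
    // Stand-in for the student table: 95 rows with ids 1..95.
    static List<Long> table = LongStream.rangeClosed(1, 95).boxed().collect(Collectors.toList());

    // Stand-in for: select id from student where id > ? order by id limit ?
    public static List<Long> fetchPage(long lastId, int size) {
        return table.stream().filter(id -> id > lastId).limit(size).collect(Collectors.toList());
    }

    public static List<Long> scanAll(int pageSize) {
        List<Long> all = new ArrayList<>();
        long lastId = 0;
        List<Long> page;
        do {
            page = fetchPage(lastId, pageSize);          // cheap regardless of depth
            all.addAll(page);
            if (!page.isEmpty()) lastId = page.get(page.size() - 1); // carry last id forward
        } while (page.size() == pageSize);               // a short page means we are done
        return all;
    }
}
```

Each iteration is equally cheap no matter how deep into the data it reaches, which is exactly what the offset form cannot guarantee.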

1.2 Missing Index

Check which indexes a table already has with show create table xxxx; and use explain on the slow statement to see whether any of them is actually used. After confirming the need, add the missing index, preferably during a low‑traffic period to avoid locking the table.

1.3 Index Invalidation

Even if an index exists, MySQL may decide not to use it. Common reasons include low cardinality columns or optimizer cost estimation. You can force index usage:

select name, code from student force index(XXXXXX) where name = '天才';

1.4 Too Many JOINs or Sub‑queries

Prefer rewriting sub‑queries as JOINs and keep the number of joined tables to 2‑3 when data volume is large. For very large joins MySQL may create temporary tables on disk, which is slow. A common workaround is to split the query: fetch data from the first table, build a map in memory, then fetch related data in a second query.
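The split‑query idea can be sketched in Java. Student and Score below are hypothetical row types standing in for the result sets of two simple, index‑backed single‑table queries; the "join" happens in an in‑memory map:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Splitting a large JOIN into two simple queries merged in application code.
public class JoinSplitter {
    public static class Student {
        final long id; final String name;
        public Student(long id, String name) { this.id = id; this.name = name; }
    }
    public static class Score {
        final long studentId; final int score;
        public Score(long studentId, int score) { this.studentId = studentId; this.score = score; }
    }

    // Query 1 returns students, query 2 returns scores; the merge replaces the JOIN.
    public static Map<String, Integer> merge(List<Student> students, List<Score> scores) {
        Map<Long, String> nameById = new HashMap<>();
        for (Student s : students) nameById.put(s.id, s.name);   // index the first result set
        Map<String, Integer> result = new HashMap<>();
        for (Score sc : scores) {
            String name = nameById.get(sc.studentId);
            if (name != null) result.put(name, sc.score);        // inner-join semantics
        }
        return result;
    }
}
```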

1.5 IN Clause with Too Many Elements

If an IN list contains too many elements, the query can become slow even when an index exists. Split the list into smaller batches, and if latency matters, run the batches in parallel on multiple threads.

select id from student where id in (1,2,3,...,1000) limit 200;
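Batching the id list can be done with Guava's Lists.partition or a few lines by hand. A minimal hand‑rolled sketch (InBatcher is an illustrative name; each batch would back one IN query):

```java
import java.util.ArrayList;
import java.util.List;

// Splits an oversized IN list into fixed-size batches; each batch becomes
// one "select ... where id in (...)" query.
public class InBatcher {
    public static <T> List<List<T>> partition(List<T> ids, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            // Copy the view so the batch survives changes to the source list.
            batches.add(new ArrayList<>(ids.subList(i, Math.min(i + batchSize, ids.size()))));
        }
        return batches;
    }
}
```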

1.6 Large Data Volume

When a single table holds massive data, simple query tuning is insufficient. Consider sharding, moving to a column‑store or a purpose‑built analytical database, and plan a full migration with rollback and fault‑tolerance strategies.

2. Complex Business Logic

2.1 Loop Calls

Example of a loop that calculates monthly data sequentially:

List<Model> list = new ArrayList<>();
for (int i = 0; i < 12; i++) {
    Model model = calOneMonthData(i);
    list.add(model);
}

These independent calculations can be parallelized with a thread pool:

// commonThreadFactory is assumed to be a ThreadFactory defined elsewhere
public static ExecutorService commonThreadPool = new ThreadPoolExecutor(
    5, 5, 300L, TimeUnit.SECONDS,
    new LinkedBlockingQueue<>(10), commonThreadFactory, new ThreadPoolExecutor.AbortPolicy());

List<Future<Model>> futures = new ArrayList<>();
for (int i = 0; i < 12; i++) {
    final int month = i; // the lambda needs an effectively final copy of i
    futures.add(commonThreadPool.submit(() -> calOneMonthData(month)));
}
List<Model> list = new ArrayList<>();
for (Future<Model> f : futures) {
    list.add(f.get()); // get() can throw InterruptedException / ExecutionException
}

2.2 Sequential Calls

When tasks are independent but executed sequentially, they can also be parallelized using CompletableFuture:

CompletableFuture<A> futureA = CompletableFuture.supplyAsync(() -> doA());
CompletableFuture<B> futureB = CompletableFuture.supplyAsync(() -> doB());
CompletableFuture.allOf(futureA, futureB).join(); // wait for A and B to finish
C c = doC(futureA.join(), futureB.join());

CompletableFuture<D> futureD = CompletableFuture.supplyAsync(() -> doD(c));
CompletableFuture<E> futureE = CompletableFuture.supplyAsync(() -> doE(c));
CompletableFuture.allOf(futureD, futureE).join(); // wait for D and E to finish
return doResult(futureD.join(), futureE.join());

3. Thread‑Pool Design Issues

A thread pool has three key sizing parameters: the core pool size, the maximum pool size, and the work queue. The order of events is often misunderstood: tasks first occupy core threads, then queue; only when the queue is full are extra threads created up to the maximum size, and beyond that the rejection policy applies. If the core pool is too small, parallelism suffers; if the queue is unbounded, latency hides there instead. Tune these parameters against measured business load, and give independent services their own pools so one slow dependency cannot starve the others.
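A sketch of a dedicated per‑service pool along these lines. ReportPool and the sizes are placeholders to tune against real load; CallerRunsPolicy is one possible rejection policy that applies back‑pressure instead of dropping work:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A dedicated, explicitly sized pool for one service. Named threads make
// thread dumps readable; the bounded queue keeps latency visible.
public class ReportPool {
    private static final AtomicInteger seq = new AtomicInteger();

    public static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            8,                              // core: threads kept alive
            16,                             // max: extra threads once the queue is full
            60L, TimeUnit.SECONDS,          // idle timeout for non-core threads
            new ArrayBlockingQueue<>(200),  // bounded queue: tasks wait here first
            r -> new Thread(r, "report-pool-" + seq.incrementAndGet()),
            new ThreadPoolExecutor.CallerRunsPolicy()); // on overload, caller runs the task
}
```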

4. Lock Design Issues

4.1 Wrong Lock Type

Using a mutual‑exclusion lock where a read‑write lock would be appropriate can degrade performance in read‑heavy scenarios.
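A minimal read‑write lock sketch for such a read‑heavy case; ConfigHolder is a hypothetical example class, not from the original:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers may hold the read lock concurrently; only writes are exclusive,
// so read-heavy traffic no longer serializes on a single mutex.
public class ConfigHolder {
    private final Map<String, String> config = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public String get(String key) {
        lock.readLock().lock();          // shared: readers do not block each other
        try {
            return config.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        lock.writeLock().lock();         // exclusive: blocks readers and writers
        try {
            config.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```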

4.2 Over‑Coarse Lock

Example of a method that locks the whole operation, even though only the data‑calculation part needs protection:

public synchronized void doSome() {
    File f = calData();
    uploadToS3(f);
    sendSuccessMessage();
}

Refactor to narrow the synchronized block:

public void doSome() {
    File f = null;
    synchronized (this) {
        f = calData();
    }
    uploadToS3(f);
    sendSuccessMessage();
}

5. Machine‑Level Problems

Full GC, unexpected restarts, or thread exhaustion can also cause latency spikes. Diagnose with monitoring tools, split large transactions, and redesign thread pools accordingly.

6. General Remedies

6.1 Caching

Cache frequently read, rarely changed data using in‑process structures (Map, Guava Cache) or external stores (Redis, Tair, Memcached). Proper key design is crucial for the hit rate.
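A minimal in‑process sketch built on ConcurrentHashMap.computeIfAbsent; SimpleCache is an illustrative class with no eviction or TTL, which Guava Cache, Caffeine, or Redis would add:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Loads a value at most once per key, then serves it from memory.
public class SimpleCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // the slow source, e.g. a DB query
    private int loads = 0;                 // counts actual loader invocations (for illustration only)

    public SimpleCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        return store.computeIfAbsent(key, k -> {
            loads++;
            return loader.apply(k);        // only reached on a cache miss
        });
    }

    public int loadCount() { return loads; }
}
```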

6.2 Callback / Async Pattern

For long‑running downstream calls (e.g., payment gateway), return a fast “processing” response and notify the caller later via a callback or a message queue such as Kafka.
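A sketch of this fast‑ack pattern; AsyncPayment is hypothetical, and updating a status map stands in for the real notification over an HTTP callback or Kafka:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// The handler records the request, answers "processing" immediately, and
// finishes the slow work asynchronously, notifying the caller afterwards.
public class AsyncPayment {
    public final ConcurrentMap<String, String> status = new ConcurrentHashMap<>();
    public CompletableFuture<Void> pending; // kept visible so callers can await completion

    public String submit(String orderId) {
        status.put(orderId, "processing");
        pending = CompletableFuture.runAsync(() -> {
            // the slow downstream call (e.g. payment gateway) would happen here
            status.put(orderId, "done");    // stand-in for the callback / MQ message
        });
        return "processing";                // the caller gets an instant answer
    }
}
```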

Conclusion

The article provides a practical checklist for diagnosing and fixing backend interface performance problems, ranging from SQL tuning and concurrency control to infrastructure‑level adjustments and generic strategies like caching and asynchronous processing.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.