Backend Interface Performance Optimization: Common Issues and Practical Solutions
This article summarizes typical causes of backend interface latency—such as slow MySQL queries, complex business logic, thread‑pool and lock misconfigurations, and machine constraints—and presents concrete optimization techniques including pagination fixes, indexing strategies, multithreading, proper thread‑pool tuning, lock refinement, caching, and asynchronous callbacks.
The author, a senior architect, describes how their system entered a promotion phase in early 2021 and began receiving numerous performance complaints, prompting a focused effort on interface optimization.
Typical causes of interface performance problems include:
Database slow queries (deep pagination, missing indexes, index invalidation, excessive joins or subqueries, large IN lists, massive data volume)
Complex business logic (loop calls, sequential calls)
Improper thread‑pool design
Inadequate lock design
Machine issues (full GC, restarts, thread saturation)
Solutions:
1. Slow Queries (MySQL)
1.1 Deep Pagination
Typical pagination query:
select name,code from student limit 100,20
When the offset becomes large, performance degrades because MySQL must read and discard every row before the offset. A better approach is a primary‑key condition, so the scan starts where the previous page ended:
select name,code from student where id>1000000 limit 20
1.2 Missing Index
Check a table's indexes with:
show create table xxxx
When adding indexes, consider column selectivity, and avoid operations that lock the table during peak hours.
1.3 Index Invalid
Sometimes the MySQL optimizer ignores an available index; you can force its use:
select name,code from student force index(XXXXXX) where name='天才'
1.4 Excessive Joins or Subqueries
Prefer joins over subqueries and limit the number of joined tables; for large joins, consider fetching data in multiple steps and assembling in business logic.
1.5 Too Many IN Elements
Split large IN lists into batches or limit the size in code:
if (ids.size() > 200) {
    throw new IllegalArgumentException("A single query must not exceed 200 records");
}
1.6 Massive Data Volume
When data size is huge, consider sharding, moving to a more suitable storage system, or redesigning the data model.
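As a sketch of the batching approach from 1.5 (the class and method names are illustrative, not from the original), an oversized ID list can be split into fixed‑size chunks, each backing its own IN (...) query:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchQuery {
    // Split ids into chunks of at most batchSize elements; each chunk can
    // then be used in its own "... WHERE id IN (...)" statement.
    public static <T> List<List<T>> partition(List<T> ids, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            batches.add(new ArrayList<>(ids.subList(i, Math.min(i + batchSize, ids.size()))));
        }
        return batches;
    }
}
```

Each batch is queried separately and the results are merged in application code, keeping every single SQL statement small.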
2. Complex Business Logic
2.1 Loop Calls
Parallelize independent calculations using a thread pool:
List<Model> list = new ArrayList<>();
for (int i = 0; i < 12; i++) {
    Model model = calOneMonthData(i);
    list.add(model);
}
// Multithreaded version
public static ExecutorService commonThreadPool = new ThreadPoolExecutor(
        5, 5, 300L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(10),
        Executors.defaultThreadFactory(),
        // CallerRunsPolicy instead of DiscardPolicy: a silently discarded
        // task would leave its Future incomplete and future.get() would block forever
        new ThreadPoolExecutor.CallerRunsPolicy());

List<Future<Model>> futures = new ArrayList<>();
for (int i = 0; i < 12; i++) {
    final int month = i; // a lambda may only capture an effectively final variable
    futures.add(commonThreadPool.submit(() -> calOneMonthData(month)));
}
List<Model> list = new ArrayList<>();
for (Future<Model> f : futures) {
    list.add(f.get()); // may throw InterruptedException / ExecutionException
}
2.2 Sequential Calls
Use CompletableFuture to run independent tasks in parallel:
CompletableFuture<A> futureA = CompletableFuture.supplyAsync(() -> doA());
CompletableFuture<B> futureB = CompletableFuture.supplyAsync(() -> doB());
CompletableFuture.allOf(futureA, futureB).join(); // wait until both A and B finish
C c = doC(futureA.join(), futureB.join());
CompletableFuture<D> futureD = CompletableFuture.supplyAsync(() -> doD(c));
CompletableFuture<E> futureE = CompletableFuture.supplyAsync(() -> doE(c));
CompletableFuture.allOf(futureD, futureE).join(); // wait until both D and E finish
return doResult(futureD.join(), futureE.join());
3. Thread‑Pool Design Issues
Adjust core size, max size, and queue capacity to match workload; avoid sharing pools across unrelated services.
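A sketch of a dedicated, explicitly sized pool (the service name and sizes here are illustrative assumptions, not tuned values): named threads make stack dumps readable, a bounded queue keeps the backlog visible, and CallerRunsPolicy throttles the caller instead of silently dropping work.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReportPool {
    private static final AtomicInteger SEQ = new AtomicInteger();

    // One pool per service, sized for its own workload, never shared.
    public static final ExecutorService POOL = new ThreadPoolExecutor(
            4,                        // core threads kept alive
            8,                        // ceiling for bursts
            60L, TimeUnit.SECONDS,    // idle timeout for non-core threads
            new ArrayBlockingQueue<>(100),            // bounded backlog
            r -> new Thread(r, "report-pool-" + SEQ.incrementAndGet()),
            new ThreadPoolExecutor.CallerRunsPolicy() // push back, don't drop
    );
}
```

Because the pool serves exactly one service, a traffic spike there slows only its own callers rather than starving unrelated features.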
4. Lock Design Issues
Use fine‑grained locks and appropriate lock types (e.g., read‑write locks). Example of reducing lock scope:
public synchronized void doSome() {
    File f = calData();
    uploadToS3(f);
    sendSuccessMessage();
}
// Refactored: only the computation needs the lock; the slow upload and
// notification run outside it
public void doSome() {
    File f;
    synchronized (this) {
        f = calData();
    }
    uploadToS3(f);
    sendSuccessMessage();
}
5. Machine Problems
Full GC, process restarts, thread exhaustion—monitor and isolate long‑running tasks, split large transactions, and redesign thread pools as needed.
6. Generic Remedies
6.1 Caching
Use in‑memory caches (Map, Guava) or distributed caches (Redis, Tair, Memcached) to serve read‑heavy data.
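A minimal read‑through in‑memory cache sketch (the class and method names are my own, not from the article); a production cache would add expiry and size limits, which Guava and similar libraries provide out of the box:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class LocalCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a database or RPC lookup

    public LocalCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // Read-through: return the cached value, loading and storing it on a miss.
    public V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }
}
```

Repeated reads of the same key then hit memory instead of the slow backing store.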
6.2 Callback / Async Processing
Return early after quick validation and store a “processing” state, then perform slow downstream calls asynchronously and notify the caller via callbacks or message queues.
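One way to sketch this early‑return pattern (all names and the in‑memory status store are illustrative assumptions; a real system would persist status in a database or Redis and notify the caller via a message queue or HTTP callback):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncOrderService {
    // Illustrative in-memory status store; production code would use durable
    // storage so status survives a restart.
    public final Map<String, String> status = new ConcurrentHashMap<>();

    // Validate cheaply, record PROCESSING, return immediately, and let the
    // slow downstream work finish in the background.
    public String submit(String orderId) {
        if (orderId == null || orderId.isEmpty()) {
            throw new IllegalArgumentException("orderId is required");
        }
        status.put(orderId, "PROCESSING");
        CompletableFuture.runAsync(() -> {
            // slow downstream calls would happen here
            status.put(orderId, "DONE"); // then notify the caller (callback / MQ)
        });
        return "PROCESSING";
    }
}
```

The interface's response time now covers only validation and a status write; the caller polls or is notified when the slow work completes.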
In conclusion, the article provides a concise checklist for diagnosing and addressing backend interface latency, encouraging discussion and sharing of further experiences.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
