10 Essential Backend Optimization Techniques Every Developer Should Master

This comprehensive guide explores ten critical backend optimization strategies—from defensive validation and batch N+1 query elimination to asynchronous processing, parallel execution, caching, connection pooling, compression, message queuing, and design patterns—providing practical examples, Go code snippets, and best‑practice insights to boost performance and reliability.

Sanyou's Java Diary

1. Introduction

Performance optimization is an evergreen topic in software development. As features grow and traffic increases, interface performance may degrade, especially under high concurrency. This article summarizes common optimization techniques to help developers improve their projects.

Performance Optimization Overview

2. Defensive Design: Validation

2.1 Business Scenario

In web development, data validation is essential. Parameters are often validated for required fields, ranges, formats, regexes, security, and custom rules.

2.2 Cases

① Protocol Buffer Validation

When using protobuf, enable protoc-gen-validate (PGV) for automatic validation. Example rule:

string title = 1 [(validate.rules).string = {min_len: 1, max_len: 100}];

Before saving to the database, perform length checks to prevent overflow.
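As a defence-in-depth measure, the same rule can be re-checked in application code before the database write. A minimal sketch of the idea (the function name validateTitle is illustrative, not part of PGV's generated code):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// validateTitle mirrors the PGV rule {min_len: 1, max_len: 100}.
// It counts runes rather than bytes, so multi-byte characters are
// measured the same way PGV measures string length.
func validateTitle(title string) error {
	n := utf8.RuneCountInString(title)
	if n < 1 || n > 100 {
		return fmt.Errorf("title: length must be between 1 and 100 runes, got %d", n)
	}
	return nil
}
```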

② Go Struct Validation

Use libraries like github.com/go-playground/validator to validate struct fields:

type User struct {
  FirstName      string     `validate:"required"`
  LastName       string     `validate:"required"`
  Age            uint8      `validate:"gte=0,lte=130"`
  Email          string     `validate:"required,email"`
  Gender         string     `validate:"oneof=male female prefer_not_to"`
  FavouriteColor string     `validate:"iscolor"`
  Addresses      []*Address `validate:"required,dive,required"`
}

Custom validators can be implemented when built‑in ones are insufficient.
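Conceptually, a custom validator is just a predicate attached to a field. A library-free sketch of that idea (the names Rule, oneOf, and validateField are illustrative; with go-playground/validator you would register such a function via its custom-validation hook instead):

```go
package main

import "fmt"

// Rule is a single validation predicate over a field value.
type Rule func(value string) error

// oneOf builds a rule that accepts only the listed values,
// mirroring the "oneof" tag in the struct example above.
func oneOf(allowed ...string) Rule {
	return func(v string) error {
		for _, a := range allowed {
			if v == a {
				return nil
			}
		}
		return fmt.Errorf("%q is not one of %v", v, allowed)
	}
}

// validateField applies rules in order and stops at the first failure.
func validateField(value string, rules ...Rule) error {
	for _, r := range rules {
		if err := r(value); err != nil {
			return err
		}
	}
	return nil
}
```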

2.3 Summary

Proverb: Poor defensive design leads to runtime failures.

Defensive design anticipates misuse and reduces errors, making software safer and more reliable.

3. Batch Thinking: Solving the N+1 Problem

3.1 Business Scenario

The N+1 problem occurs when fetching a list of objects and then querying each object's details individually, causing excessive database or RPC calls.

3.2 Cases

① Looped RPC Calls

for _, id := range ids {
  record := GetDetail(id)
  // process record
}

Solution: fetch all details in a single batch.

records := GetDetails(ids)
// process records

Benchmarks show roughly a 10× speedup when fetching 10 records in a single batch call instead of issuing 10 calls in a loop.
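Batch endpoints often cap the number of IDs per request, so in practice the ID list is split into fixed-size chunks and the batch API is called once per chunk. A sketch of the chunking step (the helper name chunk and the per-request cap are assumptions, not part of any specific API):

```go
package main

// chunk splits ids into slices of at most size elements, so a batch API
// with a per-request limit can be called once per chunk instead of once
// per id.
func chunk(ids []int64, size int) [][]int64 {
	var out [][]int64
	for len(ids) > size {
		out = append(out, ids[:size])
		ids = ids[size:]
	}
	if len(ids) > 0 {
		out = append(out, ids)
	}
	return out
}
```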

3.3 Summary

Proverb: Many hands move a boat faster.

Batch processing reduces load and improves performance.

4. Asynchronous Thinking: Solving Long‑Running Tasks

4.1 Business Scenario

Long‑running operations (e.g., data reporting, audio synthesis) can be offloaded to background tasks to reduce request latency.

4.2 Cases

① Sub‑process to Async/Coroutine

Wrap time‑consuming steps (e.g., audio synthesis) into asynchronous jobs with a task ID for progress tracking.
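The pattern of returning a task ID immediately and letting the client poll for progress can be sketched with a small in-process task store (TaskStore, Submit, and Status are illustrative names; a production system would persist the state and run workers separately):

```go
package main

import "sync"

// Status values for a background job.
const (
	StatusRunning = "running"
	StatusDone    = "done"
)

// TaskStore tracks job progress by ID so clients can poll for status
// instead of blocking on the long-running work.
type TaskStore struct {
	mu    sync.Mutex
	state map[string]string
	wg    sync.WaitGroup
}

func NewTaskStore() *TaskStore {
	return &TaskStore{state: make(map[string]string)}
}

// Submit runs fn in the background and returns immediately; the caller
// polls Status(id) later to learn when the job finished.
func (s *TaskStore) Submit(id string, fn func()) {
	s.mu.Lock()
	s.state[id] = StatusRunning
	s.mu.Unlock()

	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		fn() // the time-consuming step, e.g. audio synthesis
		s.mu.Lock()
		s.state[id] = StatusDone
		s.mu.Unlock()
	}()
}

// Status returns the current state for id ("" if unknown).
func (s *TaskStore) Status(id string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.state[id]
}

// Wait blocks until all submitted tasks finish (useful in tests).
func (s *TaskStore) Wait() { s.wg.Wait() }
```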

② Async in Databases and Message Queues

Redis uses bgsave and bgrewriteaof for asynchronous persistence. MySQL supports async, sync, and semi‑sync replication, each with trade‑offs.

4.3 Summary

Asynchronous programming improves responsiveness and concurrency but adds complexity such as error handling and potential race conditions.

5. Parallel Thinking: Improving Processing Efficiency

5.1 Business Scenario

Parallelism executes multiple tasks simultaneously, leveraging multi‑core CPUs or distributed systems.

5.2 Cases

① Parallel Subtitle Generation & COS Upload

Use errgroup to generate subtitles and upload them concurrently:

func TracksAsSrt(ctx context.Context, tracks []*Track) error {
  var eg errgroup.Group
  for _, track := range tracks {
    t := track // capture the loop variable (needed before Go 1.22)
    eg.Go(func() error {
      filename := GetSrtFilename(t)
      srt := ConvertTrackToSrt(t)
      _, err := tools.NewSrtCosHelper().Upload(ctx, filename, srt)
      return err
    })
  }
  return eg.Wait()
}

Benchmarks show parallel execution cuts total time to roughly a tenth of the sequential version.
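Unbounded fan-out can overwhelm a downstream service, so it is common to cap the number of goroutines in flight (errgroup offers SetLimit for this). A stdlib-only sketch using a buffered channel as a counting semaphore (mapLimited is an illustrative name):

```go
package main

import "sync"

// mapLimited applies fn to every item with at most limit goroutines
// running at once, returning results in input order.
func mapLimited(items []int, limit int, fn func(int) int) []int {
	results := make([]int, len(items))
	sem := make(chan struct{}, limit) // counting semaphore
	var wg sync.WaitGroup
	for i, it := range items {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot; blocks when limit workers are busy
		go func(i, v int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			results[i] = fn(v)      // each goroutine writes a distinct index
		}(i, it)
	}
	wg.Wait()
	return results
}
```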

5.3 Summary

Proverb: Many hands make light work.

Parallel processing can also achieve batch effects without explicit batch APIs.

6. Space‑for‑Time Thinking: Reducing Latency

6.1 Business Scenario

Increasing memory usage (caches, indexes) can reduce computation time by avoiding repeated calculations or disk accesses.

6.2 Cases

Common caches:

Distributed: Redis, Memcached

Local: bigcache
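Whether the store is Redis or bigcache, most read paths follow the same cache-aside shape: look up, and on a miss compute the value and store it. A minimal in-process sketch (Cache and its Misses counter are illustrative; note that holding the lock during the load is a simplification real caches avoid):

```go
package main

import "sync"

// loader computes a value on cache miss (e.g. a DB query).
type loader func(key string) string

// Cache is a minimal in-process cache-aside wrapper; bigcache or Redis
// would replace the map in production.
type Cache struct {
	mu   sync.Mutex
	data map[string]string
	load loader
	// Misses counts loader invocations, handy for observing hit ratio.
	Misses int
}

func NewCache(l loader) *Cache {
	return &Cache{data: make(map[string]string), load: l}
}

// Get returns the cached value, computing and storing it on a miss.
func (c *Cache) Get(key string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.data[key]; ok {
		return v // cache hit: no recomputation
	}
	c.Misses++
	v := c.load(key) // cache miss: compute, then store (space for time)
	c.data[key] = v
	return v
}
```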

6.3 Summary

While caching improves speed, it introduces consistency challenges, cache avalanche, penetration, and eviction issues that must be managed.

7. Connection Pool: Resource Reuse

7.1 Business Scenario

Connection pools pre‑create a set of connections (e.g., database, Redis, HTTP) to avoid the overhead of creating and destroying connections per request.

7.2 Cases

① go‑redis Pool

Core components include pool initialization, connection acquisition (Get), release (Put), monitoring, and keep-alive settings.
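The Get/Put mechanics can be illustrated with a toy pool built on a buffered channel (Conn and Pool are illustrative types; go-redis's real pool adds health checks, timeouts, and idle-connection reaping on top of this idea):

```go
package main

// Conn stands in for a real network connection.
type Conn struct{ ID int }

// Pool hands out pre-created connections; Get blocks when the pool is
// exhausted and Put returns a connection for reuse, avoiding a dial per
// request.
type Pool struct{ conns chan *Conn }

func NewPool(size int) *Pool {
	p := &Pool{conns: make(chan *Conn, size)}
	for i := 0; i < size; i++ {
		p.conns <- &Conn{ID: i} // pre-create instead of dialing on demand
	}
	return p
}

func (p *Pool) Get() *Conn  { return <-p.conns }
func (p *Pool) Put(c *Conn) { p.conns <- c }
```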

7.3 Summary

Connection pools improve performance, resource utilization, and reliability in high‑concurrency environments.

8. Security Thinking: Vulnerability Protection

8.1 Business Scenario

Security should be considered from design through implementation, protecting against data leaks, injection, and other attacks.

8.2 Cases

Reference OWASP Go Secure Coding Guidelines, covering input validation, authentication, secret management, and common vulnerabilities.
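Two recurring recommendations from such guidelines are allow-list input validation and constant-time comparison of secrets. A small sketch of both (idPattern, validID, and tokensEqual are illustrative names):

```go
package main

import (
	"crypto/subtle"
	"regexp"
)

// idPattern allow-lists identifiers: letters, digits, underscore, hyphen,
// 1-64 characters. Rejecting everything else blocks many injection inputs.
var idPattern = regexp.MustCompile(`^[a-zA-Z0-9_-]{1,64}$`)

func validID(s string) bool { return idPattern.MatchString(s) }

// tokensEqual compares secrets in constant time, so response timing does
// not leak how many leading bytes matched.
func tokensEqual(a, b string) bool {
	return subtle.ConstantTimeCompare([]byte(a), []byte(b)) == 1
}
```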

8.3 Summary

Proverb: Prevent problems before they arise.

Integrating security early reduces risk and protects user data.

9. Compression: Reducing Transfer Time

9.1 Business Scenario

Large data transfers dominate latency; compression (e.g., Gzip, Brotli, Zstd) can dramatically reduce size and improve user experience.

9.2 Cases

① HTTP Content‑Encoding

Servers indicate compression via the Content-Encoding header, allowing clients to decompress.
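The payload behind a "Content-Encoding: gzip" response is plain gzip data, which Go's standard library produces and consumes directly. A minimal round-trip sketch (gzipCompress and gzipDecompress are illustrative helper names):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
)

// gzipCompress produces the body a server would send alongside a
// "Content-Encoding: gzip" header.
func gzipCompress(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(data); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil { // Close flushes the gzip trailer
		return nil, err
	}
	return buf.Bytes(), nil
}

// gzipDecompress is what a client does after seeing the header.
func gzipDecompress(data []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```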

② Build‑time Compression

Compressing build artifacts with Zstd cut transfer time by roughly 90% compared to sending them uncompressed.

9.3 Summary

Choose compression algorithms based on ratio, speed, and resource consumption; no single best solution exists.

10. Decoupling with Message Queues

10.1 Business Scenario

Message queues enable asynchronous communication, load smoothing, system decoupling, and reliable processing.

10.2 Cases

① Decoupling Order and Inventory

Instead of direct RPC calls, the order service publishes a message; the inventory service consumes it, allowing independent scaling.

② Asynchronous Notifications

After user registration, email and SMS notifications are queued, returning a fast response to the client.

③ Peak‑shaving

Queue non‑critical tasks during traffic spikes, processing them later to keep system load stable.

④ Broadcast

One event (e.g., purchase) can trigger multiple downstream services (points, gifts, recommendations) via a single message.
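The fan-out can be sketched in-process with channels (Broker, Subscribe, and Publish are illustrative names; a real broker such as Kafka or RocketMQ adds persistence and delivery guarantees):

```go
package main

// Broker fans one published event out to every subscriber, so a purchase
// event can drive points, gifts, and recommendations independently.
type Broker struct {
	subs []chan string
}

// Subscribe registers a new consumer and returns its event channel.
func (b *Broker) Subscribe() <-chan string {
	ch := make(chan string, 8) // buffered so slow consumers don't block publish
	b.subs = append(b.subs, ch)
	return ch
}

// Publish delivers event to every subscriber.
func (b *Broker) Publish(event string) {
	for _, ch := range b.subs {
		ch <- event
	}
}
```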

⑤ Delayed Queues

Implement time‑based actions such as order timeout cancellation or scheduled notifications.

10.3 Summary

Message queues provide flexibility, reliability, and scalability across many scenarios.

11. Reuse: Design Patterns

11.1 Business Scenario

Design patterns offer reusable solutions to common software problems, improving maintainability and extensibility.

11.2 Cases

Categories:

Creational: Simple Factory, Factory Method, Abstract Factory, Builder, Prototype, Singleton

Structural: Facade, Adapter, Proxy, Composite, Flyweight, Decorator, Bridge

Behavioral: Mediator, Observer, Command, Iterator, Template Method, Strategy, State, Memento, Interpreter, Chain of Responsibility, Visitor

Implementations are available in Go at github.com/senghoo/golang-design-pattern.
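As one concrete example, the idiomatic Go Singleton uses sync.Once to make lazy initialization safe under concurrency (the config type and GetConfig name are illustrative):

```go
package main

import "sync"

// config is the single shared instance.
type config struct{ Env string }

var (
	instance *config
	once     sync.Once
)

// GetConfig lazily creates the instance exactly once, even when called
// from many goroutines at the same time.
func GetConfig() *config {
	once.Do(func() {
		instance = &config{Env: "production"}
	})
	return instance
}
```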

11.3 Summary

Reuse proven patterns instead of reinventing solutions; this enhances code quality and system robustness.

Tags: design patterns, performance optimization, backend development, concurrency, security