
Key Design Principles for High‑Concurrency Architecture and Read/Write Separation

This article explains the essential conditions, metrics, and scenario classifications for building high‑concurrency systems, then details common solutions such as database read/write separation, local and distributed caching, cache‑eviction policies, handling master‑slave lag, preventing cache penetration and avalanche, and applying CQRS to achieve scalable, high‑performance back‑end services.

Wukong Talks Architecture

High‑concurrency systems must satisfy three core requirements: high performance, high availability, and scalability, which together ensure fast response times, stable service, and the ability to handle traffic spikes.

Performance is measured by response‑time percentiles (PCT50, PCT99, PCT999) rather than simple averages, with typical thresholds of 200 ms average and PCT99 ≤ 1 s for acceptable user experience.
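As a rough illustration of how such percentiles are computed, here is a sketch of the nearest-rank method over a set of latency samples. The sample values and sample size are made up for demonstration:

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the value at quantile q (0 < q <= 1) using the
// nearest-rank method over latency samples in milliseconds.
func percentile(samples []float64, q float64) float64 {
	s := append([]float64(nil), samples...)
	sort.Float64s(s)
	rank := int(q*float64(len(s))+0.5) - 1 // nearest rank, converted to 0-based index
	if rank < 0 {
		rank = 0
	}
	if rank >= len(s) {
		rank = len(s) - 1
	}
	return s[rank]
}

func main() {
	// hypothetical latency samples (ms) for one endpoint
	lat := []float64{12, 15, 18, 20, 22, 25, 30, 45, 120, 950}
	fmt.Println("PCT50:", percentile(lat, 0.50)) // 22
	fmt.Println("PCT99:", percentile(lat, 0.99)) // 950
}
```

Note how a single slow outlier dominates PCT99 while leaving the median untouched, which is exactly why percentiles beat averages for user-facing SLOs.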

Availability is expressed as the proportion of uptime, often described using “nines” (e.g., 99.95% as a practical monitoring threshold).

Scalability is evaluated by the ratio of throughput increase to the number of added nodes, with 70‑80% considered sufficient for most workloads.
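That ratio can be sketched as a simple calculation; the QPS and node counts below are invented numbers, not benchmarks:

```go
package main

import "fmt"

// scalingFactor estimates linear-scaling efficiency: the throughput gained
// per added node, relative to the per-node throughput of the original cluster.
func scalingFactor(oldQPS, newQPS float64, oldNodes, newNodes int) float64 {
	perNode := oldQPS / float64(oldNodes)
	gained := newQPS - oldQPS
	added := float64(newNodes - oldNodes)
	return gained / (added * perNode)
}

func main() {
	// hypothetical: 10 nodes at 50k QPS, scaled to 20 nodes at 90k QPS
	fmt.Printf("%.0f%%\n", 100*scalingFactor(50000, 90000, 10, 20)) // 80%
}
```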

High‑concurrency scenarios are divided into read‑heavy, write‑heavy, and mixed cases, each requiring specific design patterns.

Database Read/Write Separation

Separate the database into a master (write) and one or more slaves (read). Applications route write SQL (INSERT/UPDATE/DELETE) to the master and read SQL (SELECT) to slaves, reducing read pressure on the primary node.

Routing can be implemented via a proxy layer (e.g., MySQL‑Proxy, MyCat) that parses SQL statements, or directly within the application using libraries such as GORM or ShardingJDBC.
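A minimal sketch of statement-based routing as an application-layer library might do it. The prefix check here is deliberately naive; real proxies such as MyCat parse the full SQL, and locking reads must still go to the master:

```go
package main

import (
	"fmt"
	"strings"
)

// routeSQL decides where a statement should run: plain SELECTs go to a
// replica, while writes, DDL, and locking reads go to the master.
func routeSQL(sql string) string {
	stmt := strings.ToUpper(strings.TrimSpace(sql))
	if strings.HasPrefix(stmt, "SELECT") && !strings.Contains(stmt, "FOR UPDATE") {
		return "replica"
	}
	return "master" // INSERT/UPDATE/DELETE, DDL, and SELECT ... FOR UPDATE
}

func main() {
	fmt.Println(routeSQL("SELECT id FROM users WHERE id = 1"))         // replica
	fmt.Println(routeSQL("UPDATE users SET name = 'a' WHERE id = 1"))  // master
	fmt.Println(routeSQL("SELECT * FROM users WHERE id = 1 FOR UPDATE")) // master
}
```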

Master‑slave replication may suffer from lag; solutions include synchronous replication, forced reads from the master for latency‑sensitive operations, and session‑level read‑master routing.

Local Cache

Local in‑process caches store frequently accessed data in memory, eliminating network latency, but they suffer from lack of sharing across instances, coupling to one language runtime, poor scalability, and loss of all cached data on process restart.

Distributed Cache

Redis and Memcached are the main open‑source distributed caches. Redis is preferred because it offers rich data types, persistence (RDB/AOF), high availability via replication and clustering, and built‑in sharding support.

Typical Redis usage flow: check cache → if hit, return; else read from DB → write to cache with TTL → subsequent reads hit the cache.
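This cache-aside flow can be sketched as below. The `Store` interface and in-memory `mapStore` are stand-ins so the example is self-contained; in production the cache methods would map to Redis GET and SET with a TTL:

```go
package main

import (
	"fmt"
	"time"
)

// Store abstracts the cache layer (e.g. Redis) so the flow is testable
// without a real server.
type Store interface {
	Get(key string) (string, bool)
	Set(key, value string, ttl time.Duration)
}

type mapStore struct{ m map[string]string }

func (s *mapStore) Get(k string) (string, bool) {
	v, ok := s.m[k]
	return v, ok
}

func (s *mapStore) Set(k, v string, _ time.Duration) { s.m[k] = v }

// loadUser implements the flow from the text: check cache → on hit, return;
// on miss, read the DB, backfill the cache with a TTL, and return.
func loadUser(cache Store, db map[string]string, id string) (string, error) {
	key := "user:" + id
	if v, ok := cache.Get(key); ok {
		return v, nil // cache hit
	}
	v, ok := db[id]
	if !ok {
		return "", fmt.Errorf("user %s not found", id)
	}
	cache.Set(key, v, 10*time.Minute) // backfill so later reads hit the cache
	return v, nil
}

func main() {
	cache := &mapStore{m: map[string]string{}}
	db := map[string]string{"1": "alice"}
	v, _ := loadUser(cache, db, "1") // miss → DB → backfill
	fmt.Println(v)
	v, _ = loadUser(cache, db, "1") // now served from the cache
	fmt.Println(v)
}
```

The 10-minute TTL is an arbitrary placeholder; the right value depends on how stale a read the business can tolerate.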

Cache Issues and Mitigations

Cache penetration: requests for non‑existent keys bypass the cache and hit the DB. Mitigate by storing a placeholder (e.g., null) in Redis or using a Bloom filter to pre‑filter impossible keys.
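The placeholder approach can be sketched like this; the sentinel value and the helper names are illustrative, and in Redis the sentinel would carry a short TTL so a later insert of the real key is picked up quickly:

```go
package main

import "fmt"

// nullSentinel marks a key as known-missing; pick a value that cannot
// collide with real data.
const nullSentinel = "<nil>"

type cache map[string]string

// lookup caches "key does not exist" as a sentinel so repeated requests
// for missing keys stop reaching the database. dbHits counts DB queries
// to make the effect visible.
func lookup(c cache, db map[string]string, id string, dbHits *int) (string, bool) {
	key := "user:" + id
	if v, ok := c[key]; ok {
		if v == nullSentinel {
			return "", false // known-missing: answered without a DB query
		}
		return v, true
	}
	*dbHits++
	v, ok := db[id]
	if !ok {
		c[key] = nullSentinel // remember the miss
		return "", false
	}
	c[key] = v
	return v, true
}

func main() {
	c := cache{}
	db := map[string]string{"1": "alice"}
	hits := 0
	lookup(c, db, "999", &hits) // first miss reaches the DB
	lookup(c, db, "999", &hits) // second miss is absorbed by the sentinel
	fmt.Println("db queries:", hits) // 1
}
```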

Cache avalanche: massive simultaneous expiration or a Redis outage floods the DB. Prevent it by randomizing TTLs and deploying highly available Redis clusters.

CQRS (Command Query Responsibility Segregation)

CQRS separates write (command) and read (query) paths. Writes go to a write store, emit events to a message queue, and the read store updates asynchronously. Queries are served from the read store, which can be a cache or a replica.

In database read/write separation, the master is the write store and slaves are the read stores; in a cache‑centric design, the DB is the write store and Redis is the read store, with binlog listeners acting as the message channel.
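The whole pattern fits in a small sketch, with a buffered channel standing in for the message queue or binlog stream and plain maps standing in for the two stores:

```go
package main

import "fmt"

// event is the change message the command side publishes after each write.
type event struct{ id, name string }

// runProjection is the read-side consumer: it applies events to the read
// store until the channel closes, then signals completion.
func runProjection(events <-chan event, readStore map[string]string, done chan<- struct{}) {
	for e := range events {
		readStore[e.id] = e.name
	}
	close(done)
}

func main() {
	writeStore := map[string]string{}
	readStore := map[string]string{}
	events := make(chan event, 16)
	done := make(chan struct{})
	go runProjection(events, readStore, done) // read store updates asynchronously

	// command path: persist to the write store, then publish the event
	writeStore["1"] = "alice"
	events <- event{"1", "alice"}
	close(events)
	<-done

	// query path: served entirely from the read store
	fmt.Println(readStore["1"]) // alice
}
```

The asynchrony is the essential point: between the write and the projection there is a window where queries see stale data, which is the same eventual-consistency trade-off as replica lag.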

Written by

Wukong Talks Architecture

Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.
