
System Splitting and Architectural Evolution: Strategies for Scaling and Decoupling

The article explains how increasing business complexity and throughput demands drive system splitting—horizontal scaling, vertical and business sharding, and database partitioning—and describes the subsequent architectural evolution toward micro‑services, caching, indexing, and asynchronous processing to improve capacity, robustness, and performance.


Author: Xu Xianjun – JD System Architect

As business complexity and system throughput grow, a single unified deployment becomes hard to maintain: modules interfere with one another, and the system grows heavy and fragile. Business decomposition, system decoupling, and internal architecture upgrades are therefore needed to improve capacity and robustness.

System Splitting

Viewed by resource, system splitting falls into application splitting and database splitting; viewed by order of adoption, it typically proceeds through horizontal scaling, vertical splitting, business splitting, and horizontal splitting.

Figure 1: Principles of System Decomposition

1. Horizontal Scaling

Horizontal scaling is the initial solution and the first choice when a system hits a bottleneck, expanding in two ways:

Adding application instances and clustering to increase throughput.

Using master‑slave replication for read/write separation in databases, protecting the most critical resource.
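Read/write separation usually sits behind a small routing layer. The sketch below is a minimal, illustrative version (class and instance names are my own, not from the article): writes always go to the master, while reads are round-robined across replicas.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal read/write-separation router: writes hit the master,
// reads are spread round-robin across replicas.
class ReadWriteRouter {
    private final String master;
    private final List<String> replicas;
    private final AtomicInteger cursor = new AtomicInteger();

    ReadWriteRouter(String master, List<String> replicas) {
        this.master = master;
        this.replicas = replicas;
    }

    String route(boolean isWrite) {
        if (isWrite || replicas.isEmpty()) return master; // writes always go to master
        int i = Math.floorMod(cursor.getAndIncrement(), replicas.size());
        return replicas.get(i); // reads rotate across replicas
    }
}
```

A real deployment would route `DataSource` objects rather than names and must also account for replication lag on read-after-write paths.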

2. Vertical Splitting

Vertical splitting truly begins system decomposition, separating business functions such as user, product, and transaction systems. Service governance is introduced to handle inter‑service dependencies, improving decoupling and stability; degradation mechanisms prevent cascading failures.

Corresponding databases are also split into user, product, transaction databases, etc.

3. Business Splitting

Business splitting targets the application layer, carving out features such as the shopping cart, checkout, order, and flash-sale systems. For flash sales, product data can be pre-loaded into a JVM-local cache to cut external calls and improve performance.
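The flash-sale pre-loading idea can be sketched as follows, assuming a hypothetical `FlashSaleCache` that is filled once at startup so the hot path never leaves the process:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of pre-loading flash-sale product data into an in-process (JVM)
// cache at startup so the hot path makes no remote calls.
// Class and method names are illustrative, not from the article.
class FlashSaleCache {
    private final Map<Long, String> products = new ConcurrentHashMap<>();

    // Called once at startup (or on a refresh schedule) with a snapshot
    // pulled from the product service/database.
    void preload(Map<Long, String> snapshot) {
        products.putAll(snapshot);
    }

    // Hot path: pure in-memory lookup, no network round trip.
    String get(long productId) {
        return products.get(productId);
    }
}
```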

Database splitting steps include vertical partitioning, vertical sharding, horizontal partitioning, and horizontal sharding.

Vertical Partitioning breaks a wide table into smaller tables, grouping columns by how often they are updated or queried together.

Figure 2: Product Table Partitioning

Vertical Sharding separates databases by business, e.g., order, product, and user databases.

Horizontal Partitioning splits a large table's rows across multiple tables within the same database to keep per-table data volume manageable.

Horizontal Sharding further distributes those tables across multiple databases.

Figure 3: Database Sharding
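Horizontal sharding needs a deterministic rule mapping a key to a database and a table. A minimal sketch, assuming an arbitrary layout of 2 databases with 4 tables each (the counts and naming scheme are my own, not the article's):

```java
// Sketch of a horizontal-sharding rule: the key is hashed to pick both a
// database and a table within it. 2 databases x 4 tables is arbitrary.
class ShardRouter {
    private static final int DB_COUNT = 2;
    private static final int TABLES_PER_DB = 4;

    // Returns a location such as "order_db_1.order_3" for a given order id.
    static String locate(long orderId) {
        long slot = Math.floorMod(orderId, DB_COUNT * TABLES_PER_DB);
        long db = slot / TABLES_PER_DB;
        long table = slot % TABLES_PER_DB;
        return "order_db_" + db + ".order_" + table;
    }
}
```

Simple modulo routing like this makes resharding painful; production systems often route through a slot table or consistent hashing so shards can be moved without rehashing every key.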

4. Horizontal Splitting

Service layering turns system services into modular building blocks, separating functional and non-functional components so they can be combined as needed (e.g., a storefront page composed from product image, price, stock, coupon, and recommendation services).

Database can also separate hot and cold data; archival of obsolete items reduces storage pressure, while recent data remains readily accessible.
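Hot/cold separation boils down to periodically sweeping aged rows into a cheaper store. A toy sketch, with both stores stubbed as maps and a last-access timestamp standing in for the real business criterion (all names are illustrative):

```java
import java.time.Instant;
import java.util.Iterator;
import java.util.Map;

// Sketch of hot/cold separation: entries last touched before a cutoff
// are moved from the hot store to an archive store.
class ColdDataArchiver {
    static int archive(Map<Long, Instant> hot, Map<Long, Instant> cold, Instant cutoff) {
        int moved = 0;
        Iterator<Map.Entry<Long, Instant>> it = hot.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Long, Instant> e = it.next();
            if (e.getValue().isBefore(cutoff)) { // cold: stale relative to cutoff
                cold.put(e.getKey(), e.getValue());
                it.remove();
                moved++;
            }
        }
        return moved;
    }
}
```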

Architectural Evolution

As system complexity and performance demands increase, internal architecture upgrades become necessary.

Early systems directly linked applications to databases; after splitting, services depend on remote calls.

Figure 4: Early Application Architecture

Performance pressures lead to the introduction of caches and indexes; typical solutions combine key‑value caches (e.g., Redis) with search indexes (e.g., Solr, Elasticsearch) to handle hot data efficiently.

In a 2014 upgrade handling 300 million hot records, the stack used Solr for indexing and Redis for caching, storing only primary keys in Solr and full records in Redis with expiration, falling back to the database on cache miss. Similar patterns now use ES + HBase.
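The lookup flow described above (index returns primary keys, cache holds full records, database is the fallback) can be sketched with all three stores stubbed in memory; the class name and stub types are mine, not from the original system:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch of the index+cache pattern: the index (Solr's role) holds only
// primary keys, the cache (Redis's role) holds full records, and a cache
// miss falls back to the database and backfills the cache.
class IndexCacheLookup {
    final Map<String, List<Long>> index;   // query -> primary keys
    final Map<Long, String> cache;         // id -> full record
    final Function<Long, String> database; // fallback loader

    IndexCacheLookup(Map<String, List<Long>> index, Map<Long, String> cache,
                     Function<Long, String> database) {
        this.index = index; this.cache = cache; this.database = database;
    }

    List<String> search(String query) {
        List<String> results = new ArrayList<>();
        for (long id : index.getOrDefault(query, List.of())) {
            String record = cache.get(id);
            if (record == null) {      // cache miss: load from DB and backfill
                record = database.apply(id);
                cache.put(id, record); // real code would also set a TTL here
            }
            results.add(record);
        }
        return results;
    }
}
```

With real Redis the backfill would use a `SET` with an expiration, matching the article's note that cached records carry a TTL.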

Figure 5: Adding Cache and Index

Frequently accessed data can be placed in JVM cache (e.g., category information) to reduce remote calls.

ThreadLocal can serve as a thread‑local cache, but requires careful eviction and validity handling.

When updating product information, validation often requires fetching the product repeatedly; thread‑local caching reduced validation time by ~20 ms and saved nearly ten thousand reads per minute.
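A request-scoped ThreadLocal cache like the one described might look as follows; the names are illustrative, and the important part is the explicit `clear()` that the article's warning about eviction and validity implies:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a ThreadLocal request-scoped cache: the first fetch in a
// request hits the loader, repeats reuse the cached copy, and the cache
// is cleared at request end to avoid stale data on pooled threads.
class RequestLocalCache {
    private static final ThreadLocal<Map<Long, String>> CACHE =
            ThreadLocal.withInitial(HashMap::new);

    static String get(long id, Function<Long, String> loader) {
        return CACHE.get().computeIfAbsent(id, loader);
    }

    // Must run when the request finishes (e.g. in a servlet filter);
    // otherwise pooled threads keep stale entries across requests.
    static void clear() {
        CACHE.remove();
    }
}
```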

Figure 6: Adding Local Cache

To mitigate unstable third‑party services, treat dependent services as data sources with their own caches, reducing external risk (e.g., caching merchant info for the product service).
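One way to treat an unstable dependency as a cached data source is a "last known good" wrapper; this is a sketch under my own naming, not the article's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of wrapping an unstable remote service (e.g. merchant info) with
// its own cache: successes refresh the cache, failures fall back to the
// last known good value instead of failing the caller.
class ResilientSource {
    private final Map<Long, String> lastGood = new ConcurrentHashMap<>();
    private final Function<Long, String> remote;

    ResilientSource(Function<Long, String> remote) { this.remote = remote; }

    String fetch(long merchantId) {
        try {
            String fresh = remote.apply(merchantId);
            lastGood.put(merchantId, fresh); // refresh cache on success
            return fresh;
        } catch (RuntimeException e) {
            String cached = lastGood.get(merchantId);
            if (cached == null) throw e;     // no fallback available
            return cached;                   // degrade to stale data
        }
    }
}
```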

Figure 7: Remote Service as Data Source

Increasing user‑experience expectations drive asynchronous processing via message middleware; for example, order creation can be decoupled by sending a message to the order service while immediately returning a payment page.
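The order-creation decoupling can be sketched with an in-process queue standing in for real message middleware such as Kafka or RocketMQ (all names here are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of asynchronous order creation: the front end enqueues an
// order-created message and returns the payment page immediately, while
// the order service consumes messages on its own schedule.
class AsyncOrderFlow {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Producer side: fire-and-forget, then respond to the user at once.
    String submitOrder(String orderId) {
        queue.offer("order-created:" + orderId);
        return "payment-page-for-" + orderId; // user sees this immediately
    }

    // Consumer side: the order service polls for the next message.
    String pollMessage() {
        return queue.poll(); // null if nothing is pending
    }
}
```

With real middleware, the producer also needs a story for delivery guarantees (e.g. transactional or at-least-once sends plus idempotent consumption), which an in-process queue glosses over.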

The business layer can be divided into foundational and composite services, while the data layer consists of data sources and indexed caches; appropriate middleware must be integrated to address various system challenges.

Figure 8: Complex Structure

Conclusion

As the architecture gradually grows more complex, stability and robustness improve, but technology choices must align with business pain points, the team's technical expertise, and resource constraints to avoid unrealistic solutions.

The above summarizes recent technical transformations and upgrades; future posts may dive deeper into specific topics.

System splitting ultimately leads to micro‑services, and architectural evolution reflects technological advancement.

Tags: architecture, microservices, caching, horizontal scaling, system splitting, vertical sharding
Written by JD Tech

Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.
