System Splitting and Architectural Evolution: Strategies for Scaling and Robustness
The article explains how increasing business complexity and system throughput drive the need for system splitting, decoupling, and architectural evolution—covering horizontal scaling, vertical and business splitting, database sharding, caching, and the transition to micro‑services to improve capacity, stability, and performance.
As business complexity and system throughput grow, unified deployment becomes harder, and tightly interdependent modules make the system heavy and fragile. The business therefore needs to be split, systems decoupled, and the internal architecture upgraded to improve capacity and robustness. The following covers two parts: system splitting and structural evolution.
1. System Splitting
System splitting, from a resource perspective, covers application splitting and database splitting. In order of progression, it typically moves through horizontal scaling, vertical splitting, business splitting, and horizontal splitting.
Figure 1 System Decomposition Principles
1) Horizontal Scaling
Horizontal scaling is the initial solution and the first choice when the system hits a bottleneck. It expands capacity in two main ways: adding application instances and clustering them to increase throughput, and using master-slave database replication for read/write separation, since the database is the most critical resource.
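The read/write separation above can be sketched as a small router: writes always go to the master, while reads rotate across the replicas. This is a minimal illustration, not the article's implementation; the class and node names are assumptions.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of read/write separation: writes go to the master,
// reads are spread round-robin across the replicas.
public class ReadWriteRouter {
    private final String master;
    private final List<String> replicas;
    private final AtomicInteger counter = new AtomicInteger();

    public ReadWriteRouter(String master, List<String> replicas) {
        this.master = master;
        this.replicas = replicas;
    }

    public String route(boolean isWrite) {
        if (isWrite || replicas.isEmpty()) {
            return master;                      // writes always hit the master
        }
        int i = Math.floorMod(counter.getAndIncrement(), replicas.size());
        return replicas.get(i);                 // reads rotate over replicas
    }
}
```

In practice this logic usually lives in a database proxy or a routing data source rather than in application code, but the decision rule is the same.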
2) Vertical Splitting
Vertical splitting is where real system splitting begins: the system is separated by business function into a user system, product system, transaction system, and so on. Service call governance is introduced to manage dependencies between sub-systems, improving decoupling and stability, while degradation mechanisms prevent cascading failures.
Business‑related databases are also split into user DB, product DB, transaction DB, etc.
3) Business Splitting
Business splitting targets the application layer, separating features such as shopping cart, checkout, order, flash sale, etc. For flash-sale systems, product information can be pre-loaded into the JVM cache to reduce external calls and improve performance, relieving pressure on the product system.
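The pre-loading idea can be sketched as follows: the known flash-sale SKUs are loaded into a JVM-local map before traffic arrives, so hot reads never leave the process. The class name, key type, and loader are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of pre-loading flash-sale product info into a JVM-local
// cache at startup so hot reads avoid remote calls to the product system.
public class FlashSaleCache {
    private final Map<Long, String> products = new ConcurrentHashMap<>();

    // Pre-load the known flash-sale SKUs before traffic arrives.
    public void preload(Iterable<Long> skuIds, Function<Long, String> loader) {
        for (Long id : skuIds) {
            products.put(id, loader.apply(id));
        }
    }

    // Serve from local memory; fall back to the loader only on a miss.
    public String get(Long skuId, Function<Long, String> loader) {
        return products.computeIfAbsent(skuId, loader);
    }
}
```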
Database splitting proceeds in steps: vertical table partitioning, vertical database sharding, horizontal table partitioning, horizontal database sharding.
Figure 2 Product Table Splitting
Vertical table partitioning splits a wide table into smaller tables by column, grouping fields by update or query frequency; vertical database sharding splits databases by business domain (e.g., order DB, product DB, user DB); horizontal table partitioning splits a large table's rows across many tables; horizontal database sharding then distributes those tables across multiple database instances.
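Horizontal sharding ultimately comes down to a routing function from the shard key to a physical database and table. A minimal sketch, assuming an illustrative 4-database x 8-table layout keyed by order id (the layout and naming are not from the article):

```java
// Minimal sketch of horizontal sharding routing: a stable modulo of the
// shard key picks a database, then a table within it.
public class ShardRouter {
    static final int DB_COUNT = 4;
    static final int TABLES_PER_DB = 8;

    // Map an order id onto a physical location "order_db_{d}.order_{t}".
    public static String locate(long orderId) {
        int slot = (int) Math.floorMod(orderId, (long) (DB_COUNT * TABLES_PER_DB));
        int db = slot / TABLES_PER_DB;      // which database instance
        int table = slot % TABLES_PER_DB;   // which table within it
        return "order_db_" + db + ".order_" + table;
    }
}
```

A fixed total slot count (here 32) matters: resharding changes every mapping, which is why consistent hashing or slot-based indirection is often layered on top in real systems.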
Figure 3 Sharding and Partitioning
4) Horizontal Splitting
Horizontal splitting layers services so that system capabilities become building blocks, separating functional from non-functional systems. The middle platform acts as a component library, while the front end composes these blocks to respond quickly to business changes; for example, a product page is assembled from the main image, price, stock, coupons, and recommendations.
The database layer can likewise separate hot and cold data: obsolete product records can be archived, and historical order data can be moved to different, cheaper storage.
2. Structural Evolution
Structural evolution is required as system complexity and performance demands increase, prompting internal architecture upgrades. Early systems directly linked applications to databases; after splitting, functions rely on other systems via remote calls.
Figure 4 Early Application Structure
Rising performance demands make the database a bottleneck, leading to the introduction of caches and indexes to serve key-value lookups and complex queries respectively. Cache + index is a basic high-concurrency solution, with many variations in implementation.
In 2014, a system with 300 million hot records was upgraded using Solr + Redis: Solr stored only the index, query results carried just primary keys, and the full records were then fetched from Redis. Redis held only hot data with expiration; on a cache miss, the system fell back to the database and wrote the result back to Redis. This balances resource cost against performance. Nowadays ES + HBase is a popular alternative for the same pattern.
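The query path just described can be sketched end to end. Solr, Redis, and the database are stubbed here with in-memory maps purely for illustration; only the control flow (index returns primary keys, cache first, miss falls back to the DB and writes back) reflects the text.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of the index + cache pattern: the search index returns only
// primary keys, records come from the cache, and a miss falls back to the
// database and is written back for next time.
public class IndexCacheQuery {
    private final Map<String, List<Long>> index;  // query -> primary keys (stands in for Solr)
    private final Map<Long, String> cache;        // hot records (stands in for Redis)
    private final Map<Long, String> database;     // source of truth

    public IndexCacheQuery(Map<String, List<Long>> index,
                           Map<Long, String> cache,
                           Map<Long, String> database) {
        this.index = index;
        this.cache = cache;
        this.database = database;
    }

    public List<String> search(String query) {
        List<String> results = new ArrayList<>();
        for (Long id : index.getOrDefault(query, List.of())) {
            String record = cache.get(id);
            if (record == null) {                 // cache miss: hit the DB...
                record = database.get(id);
                if (record != null) {
                    cache.put(id, record);        // ...and write back
                }
            }
            if (record != null) {
                results.add(record);
            }
        }
        return results;
    }
}
```

In the real deployment the cache entries would carry an expiration so Redis holds only hot data, as the text notes.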
Figure 5 Adding Cache and Index
Frequently used data can be placed in a JVM-local cache, for example category information in e-commerce. ThreadLocal can provide a thread-local cache, but eviction and validity must be managed carefully.
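A minimal sketch of such a thread-local cache follows. The key point the text warns about is the cleanup: with pooled threads, an uncleared ThreadLocal leaks entries into the next request, so `clear()` must run when the request finishes.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a per-thread cache for request-scoped reuse of data
// such as category info. Must be cleared at the end of each request, or
// stale entries leak across pooled-thread reuse.
public class ThreadLocalCache {
    private static final ThreadLocal<Map<String, Object>> CACHE =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, Object value) {
        CACHE.get().put(key, value);
    }

    public static Object get(String key) {
        return CACHE.get().get(key);
    }

    // Call when the request finishes (e.g., in a servlet filter's finally
    // block) to drop the whole per-thread map.
    public static void clear() {
        CACHE.remove();
    }
}
```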
Figure 6 Adding Local Cache
When a dependent service is unstable, treat it as a data source fronted by a cache to reduce risk; for example, caching merchant info isolates the product system from instability in the merchant service.
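One simple way to realize this isolation, sketched below under assumed names: successful remote calls refresh the cache, and when the remote service throws, the last known value is served instead of propagating the failure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of treating an unstable remote service as a cached data
// source: successes refresh the cache, failures fall back to the last
// known value so the caller is isolated from the outage.
public class MerchantInfoCache {
    private final Map<Long, String> lastKnown = new ConcurrentHashMap<>();
    private final Function<Long, String> remoteService;

    public MerchantInfoCache(Function<Long, String> remoteService) {
        this.remoteService = remoteService;
    }

    public String getMerchant(Long id) {
        try {
            String fresh = remoteService.apply(id);  // may throw when unstable
            lastKnown.put(id, fresh);
            return fresh;
        } catch (RuntimeException e) {
            return lastKnown.get(id);                // serve the cached copy
        }
    }
}
```

Real deployments typically add a TTL and a circuit breaker on top, but the fallback-to-stale principle is the core of the isolation.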
Figure 7 Remote Service Evolves to Data Source
User experience demands low latency, and asynchronous processing via message middleware is an effective answer. In an e-commerce order flow, the front end returns the payment page immediately while the order system persists the data asynchronously.
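The asynchronous hand-off can be sketched with an in-process queue standing in for real message middleware (Kafka, RocketMQ, etc.); the shape of the flow, not the transport, is the point here.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the asynchronous order flow: the request thread
// enqueues the order and returns at once, while a background consumer
// persists it. An in-process queue stands in for message middleware.
public class AsyncOrderPipeline {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Called on the request path: hand off and return immediately, so the
    // payment page can be rendered without waiting on the order DB.
    public void submitOrder(String order) {
        queue.offer(order);
    }

    // Called by a background worker: block until an order arrives, then
    // persist it (the DB write would happen here).
    public String takeAndPersist() {
        try {
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

With real middleware the queue is durable, so an order accepted on the request path survives a consumer crash; an in-process queue does not give that guarantee.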
Business layer can be divided into basic services and composite services; data layer into data sources and index caches. Technologies and middleware must be combined effectively to solve various system problems.
Figure 8 Complex Structure
3. Conclusion
As systems evolve, the architecture inevitably grows more complex in pursuit of stability and robustness. Technology choices must align with business pain points, the team's technical reserves, and resource constraints; otherwise they become unrealistic.
The above summarizes recent technical transformations and upgrades; detailed write-ups on specific points may follow. System splitting ultimately leads toward micro-services, and structural evolution reflects the corresponding technological upgrades.
IT Architects Alliance
Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture adjustments with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.