
Evolution of System Architecture: From LAMP to Distributed Services and Service Governance

This article outlines the progressive evolution of system architecture—from a single‑server LAMP setup through service‑data separation, caching, clustering, read/write splitting, CDN, distributed databases, NoSQL, business splitting, and finally distributed services with messaging, service frameworks, service bus, communication patterns, and governance—highlighting the motivations, characteristics, and challenges at each stage.

Note: Architecture determines a system's stability, scalability, and concurrency; its evolution is a continuous improvement from simple to complex, accumulating experience and technological refinement.

Initial Stage Architecture

In the initial stage, all resources such as applications, databases, and files reside on a single server, commonly referred to as LAMP.

Feature: All resources are on one server.

Description: Typically Linux as the OS, PHP for the application, Apache as the web server, and MySQL as the database, using free open‑source software on a cheap server to start system development.

Application Service and Data Service Separation

As traffic grows, one machine can no longer run the application, store files, and host the database all at once, so these roles are split onto dedicated application, file, and database servers.

Feature: Applications, databases, and files are deployed on independent resources.

Description: Growing data volume exceeds the capacity of a single server, so separating application and data improves concurrency handling and storage capacity.

Using Cache to Improve Performance

Feature: Frequently accessed data is stored in a cache server, reducing database hits and pressure.

Description: According to the 80/20 rule, 80% of requests target 20% of the data. A cache can be local to the application server (fast, but limited in size because it shares memory with the application) or distributed (a dedicated cache cluster whose capacity is not bound by a single machine's memory).
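A local cache for the hot 20% of data can be sketched with a bounded LRU map; this is an illustrative in-process example, not a production cache, and the key names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a local (in-process) LRU cache for hot data.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // access-order: gets refresh an entry's recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least-recently-used entry beyond capacity
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("user:1", "alice");
        cache.put("user:2", "bob");
        cache.get("user:1");          // touch user:1 so it stays hot
        cache.put("user:3", "carol"); // evicts user:2, the least recently used
        System.out.println(cache.keySet()); // [user:1, user:3]
    }
}
```

A distributed cache (e.g. a dedicated cache cluster) moves this map out of the application process so it can grow past one machine's memory and be shared by all application servers.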

Using Application Server Cluster

Even with caching and data separation, a single application server cannot keep up at peak times; placing multiple servers behind a load balancer lets them provide external service simultaneously.

Feature: Multiple servers handle requests in parallel, overcoming single‑server limits.

Description: Clustering is a common solution for high concurrency and massive data, allowing resources to be added to increase processing capacity.
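The load balancer's simplest dispatch policy is round-robin; a minimal sketch, with illustrative server names (real balancers such as Nginx or LVS add health checks and weighting):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of round-robin dispatch across an application server cluster.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Pick the next server; floorMod keeps the index valid even after int overflow.
    public String next() {
        return servers.get(Math.floorMod(counter.getAndIncrement(), servers.size()));
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(List.of("app-1", "app-2", "app-3"));
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.next()); // app-1, app-2, app-3, app-1
        }
    }
}
```

Adding capacity then means adding an entry to the server list rather than upgrading a single machine.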

Database Read‑Write Separation

Even with application clustering and caching, all writes and a share of reads still land on a single database, and under high concurrency the contention between reads and writes slows the whole system.

Feature: The database is split into a master that handles writes and replicas that serve reads, kept in sync through replication.

Description: Most relational databases support master-replica replication. The application sends writes to the master, spreads reads across replicas, and a data-access layer hides the routing so business code is unaware of the split.
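The data-access layer's routing decision can be sketched as follows; the host names are illustrative, and a real layer would also handle replication lag and failover:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a data-access layer routing writes to the master and reads to replicas.
public class ReadWriteRouter {
    private final String master;
    private final List<String> replicas;
    private final AtomicInteger idx = new AtomicInteger();

    public ReadWriteRouter(String master, List<String> replicas) {
        this.master = master;
        this.replicas = replicas;
    }

    // All INSERT/UPDATE/DELETE statements go to the master.
    public String routeWrite() { return master; }

    // SELECTs are spread round-robin over the replicas (master if none exist).
    public String routeRead() {
        if (replicas.isEmpty()) return master;
        return replicas.get(Math.floorMod(idx.getAndIncrement(), replicas.size()));
    }

    public static void main(String[] args) {
        ReadWriteRouter router = new ReadWriteRouter("db-master", List.of("db-replica-1", "db-replica-2"));
        System.out.println(router.routeWrite()); // db-master
        System.out.println(router.routeRead());  // db-replica-1
        System.out.println(router.routeRead());  // db-replica-2
    }
}
```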

Reverse Proxy and CDN Acceleration

Feature: CDN and reverse proxy accelerate access speed.

Description: To handle diverse network environments and regional users, CDN and reverse proxy cache content, speeding up access and reducing backend load.

Distributed File System and Distributed Database

As data volume keeps growing, read/write splitting alone becomes insufficient; databases are sharded across machines and large tables are partitioned.

Feature: Databases become distributed; file systems become distributed.

Description: A single powerful server cannot meet the needs of large‑scale systems; eventually a distributed database and file system are required, with business‑level sharding often preferred.
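Business-level sharding usually routes each record by a stable hash of its key; a minimal sketch (the key format and shard count are illustrative):

```java
// Sketch of key-based sharding: a stable hash maps each key to one database shard.
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // floorMod keeps the result non-negative even for negative hash codes.
    public int shardFor(String key) {
        return Math.floorMod(key.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(4);
        // The same key always lands on the same shard, so lookups stay cheap.
        System.out.println("user:42 -> shard " + router.shardFor("user:42"));
    }
}
```

Simple modulo sharding makes resharding expensive (most keys move when the shard count changes), which is why consistent hashing or range-based schemes are often preferred at scale.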

Using NoSQL and Search Engine

Feature: The system introduces NoSQL databases and search engines.

Description: Complex business logic demands flexible storage and retrieval; non‑relational databases and search technologies are adopted, accessed via a unified data‑access layer.

Business Splitting

Feature: The system is refactored by business lines, deploying application servers per business.

Description: To cope with complex scenarios, the system is divided into independent products; communication can be via hyperlinks, message queues, or shared data stores. Vertical splitting creates separate web applications; horizontal splitting extracts reusable services for distributed deployment.

Distributed Services

Feature: Common modules are extracted and deployed on distributed servers for application servers to call.

Description: As the split-out applications multiply, each keeps its own connections to every database, so database connections become a bottleneck that can exhaust resources and even deny service; extracting shared modules into common services called over the network relieves this pressure.

Problems Faced by Distributed Services

(1) Managing an increasing number of service URLs becomes difficult, and hardware load balancers become single points of pressure.

(2) Service dependencies grow complex, making startup order and architecture description hard to track.

(3) Rising request volume reveals capacity limits; it is unclear how many machines are needed or when to scale.

(4) Communication overhead increases; identifying responsible owners and parameter contracts becomes challenging.

(5) Multiple business consumers per service raise quality‑of‑service concerns.

(6) Upgrades can cause unexpected failures (e.g., cache errors leading to OOM); fault isolation, degradation, and resource throttling are needed.
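One common form of resource throttling and fault isolation is a concurrency gate: at most N calls reach a fragile dependency, and excess callers get a degraded fallback instead of piling up. A minimal sketch with illustrative names:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Sketch of resource isolation with graceful degradation.
public class IsolationGuard {
    private final Semaphore permits;

    public IsolationGuard(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Run the remote call only if a permit is free; otherwise degrade immediately.
    public String callWithFallback(Supplier<String> remoteCall, String fallback) {
        if (!permits.tryAcquire()) {
            return fallback; // degrade instead of queueing and exhausting threads
        }
        try {
            return remoteCall.get();
        } finally {
            permits.release();
        }
    }

    public static void main(String[] args) {
        IsolationGuard guard = new IsolationGuard(2);
        System.out.println(guard.callWithFallback(() -> "live result", "cached fallback"));
    }
}
```

Bounding concurrency per dependency keeps one misbehaving service (say, a cache error cascading toward OOM) from dragging down its callers.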

Java Distributed Application Technical Foundations

(Image illustrating foundational concepts)

Key Technologies in Distributed Services: Message Queue Architecture

(Image)

Message queues decouple systems by passing messages; different subsystems process the same message.
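The decoupling can be sketched in-process with a blocking queue: the producer returns immediately while a consumer thread drains messages at its own pace. A real deployment would use a broker such as Kafka or RabbitMQ instead of an in-memory queue; the message names here are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// In-process sketch of queue-based decoupling between producer and consumer.
public class OrderQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    String msg = queue.take(); // blocks until a message arrives
                    System.out.println("processed " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer side: enqueue and move on without waiting for processing.
        for (int i = 1; i <= 3; i++) {
            queue.put("order-" + i);
        }
        consumer.join();
    }
}
```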

Key Technologies in Distributed Services: Message Queue Principle

(Image)

Key Technologies in Distributed Services: Service Framework Architecture

(Image)

The service framework separates system coupling via interfaces; subsystems interact through a common interface. It suits homogeneous systems such as mobile, web, and external integrations.
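Interface-based decoupling can be sketched as follows: consumers program against a contract and never see the provider class. The service and method names are hypothetical; a real service framework such as Dubbo would supply a remote proxy where this demo wires a local implementation.

```java
// Sketch of interface-based decoupling between service consumer and provider.
public class ServiceFrameworkDemo {
    // Shared contract published to all subsystems.
    interface UserService {
        String findName(long userId);
    }

    // Provider-side implementation, deployed on its own servers.
    static class UserServiceImpl implements UserService {
        public String findName(long userId) {
            return "user-" + userId; // stand-in for a real lookup
        }
    }

    public static void main(String[] args) {
        // The framework would inject a remote proxy here; consumer code is unchanged.
        UserService service = new UserServiceImpl();
        System.out.println(service.findName(7L)); // user-7
    }
}
```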

Key Technologies in Distributed Services: Service Framework Principle

(Image)

Key Technologies in Distributed Services: Service Bus Architecture

(Image)

Like the service framework, the service bus decouples systems via interfaces, using a bus model suitable for internal heterogeneous systems.

Key Technologies in Distributed Services: Service Bus Principle

(Image)

Distributed System Interaction: Five Communication Patterns

Request/Response (synchronous): client blocks until server replies.

Callback (asynchronous): client sends RPC, server processes and calls back to client’s endpoint.

Future: client receives a Future object and blocks on .get() when result is needed.

Oneway: client fires request and continues without waiting for a response.

Reliable: messages are persisted and retried until successfully delivered.
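The Future pattern above can be sketched with Java's `CompletableFuture`: the call returns at once with a placeholder, and the caller blocks only when the result is actually needed. The method name is illustrative.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the Future communication pattern.
public class FutureDemo {
    static CompletableFuture<String> fetchGreeting() {
        // Simulated remote call running on another thread.
        return CompletableFuture.supplyAsync(() -> "hello");
    }

    public static void main(String[] args) {
        CompletableFuture<String> future = fetchGreeting(); // returns immediately
        // ... the caller can do other work here ...
        System.out.println(future.join()); // blocks only now, prints "hello"
    }
}
```

The callback pattern is the same mechanism driven the other way: instead of blocking on `join()`, the caller registers a continuation (e.g. `future.thenAccept(...)`) that the framework invokes when the response arrives.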

Implementation of Communication Modes

Synchronous Point‑to‑Point Service Mode

(Image)

Asynchronous Point‑to‑Point Message Mode 1

(Image)

Asynchronous Point‑to‑Point Message Mode 2

(Image)

Asynchronous Broadcast Message Mode

(Image)

Service Governance in Distributed Architecture

Service governance is the core function of a service framework or service bus: it guarantees service quality by defining agreements between providers and consumers, controlling traffic, limiting malicious access, and rejecting load beyond agreed capacity.

Based on Dubbo, governance includes service management (listing, upgrading, downgrading, disabling, weight adjustment) and monitoring (requests per second, latency, peak usage) to guide cluster planning and performance tuning.

Dubbo‑Based Service Governance: Service Routing

(Image)

Dubbo‑Based Service Governance: Service Protection

(Image)

OSB‑Based Service Governance

(Images illustrating OSB governance)

Note: The content is sourced from the internet, author anonymous, for learning reference only; copyright belongs to the original author.

Tags: System Architecture, Backend Development, load balancing, caching, service governance, distributed services
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
