From Monolith to Microservices: A Practical Evolution Guide
This article walks through the step‑by‑step transformation of a simple online supermarket from a monolithic web app to a fully‑featured microservice architecture, covering common pitfalls, component choices, monitoring, tracing, logging, service discovery, fault‑tolerance, testing, and deployment strategies.
As a website grows, a monolithic application often cannot keep up with new requirements, prompting a shift to a distributed microservice architecture.
Initial Requirements
A few years ago, Xiao Ming and Xiao Pi built an online supermarket with a simple website and an admin backend for user, product, and order management.
Website: user registration/login, product display, order placement
Admin backend: user management, product management, order management
The initial architecture was a single website and a separate admin backend deployed on a cloud server.
Business Growth
Fierce competition pushed the team to add promotions, mobile channels, and data‑driven personalization, leading to a tangled codebase with duplicated logic, unclear service boundaries, shared databases, and deployment difficulties.
Time for Change
Recognizing these issues, they abstracted common business capabilities into shared services: User, Product, Promotion, Order, and Data‑Analysis.
Each application now consumes these services, reducing redundant code and leaving only thin controllers and front‑ends.
Service Splitting and Database Isolation
After separating services, the shared database remained a bottleneck, so they partitioned persistence per service and introduced a message queue for real‑time processing.
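The split can be illustrated with a minimal sketch: the order service writes only to its own database and publishes an event, which the data‑analysis service consumes from the queue instead of reading the order tables directly. The in‑memory queue below stands in for a real broker such as Kafka or RabbitMQ; topic and field names are illustrative, not from the article.

```python
import json
import queue

# In-memory queue standing in for a real message broker.
order_events = queue.Queue()

def place_order(order_id, user_id, amount):
    """Order service: persist to its own database, then publish an event."""
    # ... write to the order service's private database here ...
    event = {"type": "order.created", "order_id": order_id,
             "user_id": user_id, "amount": amount}
    order_events.put(json.dumps(event))

def consume_for_analysis():
    """Data-analysis service: process events without touching the order DB."""
    processed = []
    while not order_events.empty():
        event = json.loads(order_events.get())
        if event["type"] == "order.created":
            processed.append(event["order_id"])
    return processed

place_order("o-1", "u-42", 99.0)
place_order("o-2", "u-7", 15.5)
print(consume_for_analysis())  # → ['o-1', 'o-2']
```

Because the two services now share only the event contract, each can change its schema or scale its database independently.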
Monitoring – Detecting Fault Signs
They built a monitoring stack using Prometheus to scrape metrics from components (Redis, MySQL, business services) and Grafana for dashboards and alerts.
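Prometheus works on a pull model: each service exposes a /metrics endpoint in the text exposition format, and Prometheus scrapes it on a schedule. The sketch below keeps counters in plain Python to show the shape of that endpoint; a real service would use the official prometheus_client library, and the metric names here are illustrative.

```python
# Minimal sketch of the Prometheus pull model.
metrics = {"http_requests_total": 0, "http_request_errors_total": 0}

def handle_request(ok=True):
    """Business handler: increments counters as a side effect."""
    metrics["http_requests_total"] += 1
    if not ok:
        metrics["http_request_errors_total"] += 1

def render_metrics():
    """What a GET /metrics endpoint returns for Prometheus to scrape."""
    return "\n".join(f"{name} {value}"
                     for name, value in sorted(metrics.items()))

handle_request()
handle_request(ok=False)
print(render_metrics())
```

Grafana then queries Prometheus for these series to drive dashboards and alert rules, e.g. alerting when the error counter's rate exceeds a threshold.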
Tracing – Locating Issues
To trace request flows, they added traceId, spanId, parentId, requestTime, and responseTime headers, using Zipkin to collect and visualize call graphs.
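The tracing fields fit together as follows: every hop carries the same traceId, gets a fresh spanId, and records its caller's spanId as parentId, which is what lets Zipkin reassemble the call graph. A minimal sketch, with an in‑memory list standing in for the Zipkin collector:

```python
import time
import uuid

collected_spans = []  # stand-in for what a Zipkin collector would receive

def start_span(trace_id=None, parent_id=None):
    """Root spans pass no arguments; downstream spans inherit the traceId."""
    return {"traceId": trace_id or uuid.uuid4().hex,
            "spanId": uuid.uuid4().hex,
            "parentId": parent_id,
            "requestTime": time.time()}

def finish_span(span):
    span["responseTime"] = time.time()
    collected_spans.append(span)

def product_service(incoming):
    # Downstream hop: same traceId, caller's spanId becomes parentId.
    span = start_span(incoming["traceId"], incoming["spanId"])
    finish_span(span)

def gateway_request():
    span = start_span()    # root span: no parent
    product_service(span)  # headers would carry these fields over HTTP
    finish_span(span)
    return span["traceId"]

trace_id = gateway_request()
print(all(s["traceId"] == trace_id for s in collected_spans))  # → True
```

In a real deployment the fields travel as HTTP headers, and requestTime/responseTime on each span give the per‑hop latency that pinpoints the slow service.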
Log Analysis
For large‑scale log handling, they adopted the ELK stack (Elasticsearch, Logstash, Kibana) with agents collecting logs from each service.
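The ELK pipeline is much easier to operate when services emit one JSON object per log line, so the shipping agent can forward records to Elasticsearch without fragile regex parsing in Logstash. A minimal sketch using the standard logging module; the service name is an illustrative field, not from the article:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line for the log agent."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": "order-service",  # illustrative service name
            "message": record.getMessage(),
        })

logger = logging.getLogger("order-service")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")  # emits one JSON line to stderr
```

Adding the traceId from the tracing section as another JSON field ties log lines to distributed traces, which makes cross‑service debugging far faster.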
Gateway – Access Control and Service Governance
A gateway sits between callers and services to enforce permissions and provide a unified API surface.
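The two gateway duties, access control and a unified API surface, can be sketched as one entry point that checks permissions and then routes by path prefix. The route table and role names below are illustrative:

```python
# Minimal gateway sketch: permission check at the edge, then routing.
ROUTES = {"/users": "user-service",
          "/products": "product-service",
          "/orders": "order-service"}
ADMIN_ONLY = {"/users"}  # e.g. only the admin backend may manage users

def gateway(path, role):
    prefix = "/" + path.strip("/").split("/")[0]
    if prefix not in ROUTES:
        return (404, None)
    if prefix in ADMIN_ONLY and role != "admin":
        return (403, None)            # rejected before reaching any service
    return (200, ROUTES[prefix])      # forward to the backing service

print(gateway("/orders/123", "customer"))  # → (200, 'order-service')
print(gateway("/users/5", "customer"))     # → (403, None)
```

Centralizing these checks means individual services no longer need to duplicate authentication logic, and callers see one stable address instead of many.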
Service Registration & Discovery – Dynamic Scaling
They deployed a service‑discovery component (e.g., Zookeeper, Eureka, Consul) so services automatically register themselves and clients obtain up‑to‑date endpoint lists.
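The register/discover loop can be sketched in a few lines: instances register on startup, renew a heartbeat while alive, and clients filter out stale entries before picking an endpoint. This in‑memory registry stands in for Zookeeper, Eureka, or Consul, with lease expiry simplified to a timestamp check:

```python
import random
import time

TTL = 30.0      # seconds before a silent instance is considered dead
registry = {}   # service name -> {address: last_heartbeat_time}

def register(service, address):
    """Called by an instance on startup (and reused as its heartbeat)."""
    registry.setdefault(service, {})[address] = time.time()

def discover(service):
    """Return live endpoints; the client picks one (client-side balancing)."""
    now = time.time()
    return [addr for addr, seen in registry.get(service, {}).items()
            if now - seen < TTL]

register("product-service", "10.0.0.1:8080")
register("product-service", "10.0.0.2:8080")
print(random.choice(discover("product-service")))  # pick any live instance
```

With this in place, scaling out is just starting a new instance: it registers itself, and callers see it on their next discovery refresh with no config changes.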
Circuit Breaker, Degradation, and Rate Limiting
When a downstream service fails, circuit breaking prevents cascading timeouts; non‑critical services can degrade gracefully, and rate limiting protects against traffic spikes.
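A circuit breaker makes the fail‑fast behavior concrete: after repeated failures the circuit opens and calls return a fallback immediately instead of queueing up timeouts, and after a cooldown a probe is allowed through. A minimal sketch with illustrative thresholds:

```python
import time

class CircuitBreaker:
    """Closed (normal) -> open (fail fast) -> half-open (probe) sketch."""
    def __init__(self, max_failures=3, reset_after=10.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                return fallback()      # open: degrade, don't wait on timeouts
            self.opened_at = None      # half-open: let one probe through
        try:
            result = fn()
            self.failures = 0          # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            return fallback()          # graceful degradation path

breaker = CircuitBreaker()
```

The fallback is where degradation lives: a non‑critical recommendation service might return a cached or empty list, keeping the order flow itself healthy.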
Testing
Testing is organized into three layers: end‑to‑end, service‑level, and unit tests, with mocks used for dependent services.
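At the service level, the dependent services are replaced with mocks so a test exercises one service's logic in isolation. A sketch using unittest.mock; the order/price functions and names are illustrative:

```python
import unittest
from unittest import mock

def create_order(product_client, product_id, qty):
    """Order logic under test; product_client is a dependency to be mocked."""
    price = product_client.get_price(product_id)
    return {"product_id": product_id, "qty": qty, "total": price * qty}

class OrderServiceTest(unittest.TestCase):
    def test_total_uses_mocked_price(self):
        product_client = mock.Mock()
        product_client.get_price.return_value = 9.5  # stubbed dependency
        order = create_order(product_client, "p-1", 3)
        self.assertEqual(order["total"], 28.5)
        # Verify the interaction with the dependency, not just the result.
        product_client.get_price.assert_called_once_with("p-1")

unittest.main(argv=["test"], exit=False)
```

Keeping most coverage in unit and service‑level tests, with only a thin layer of end‑to‑end tests, keeps the suite fast and failures easy to attribute to one service.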
Microservice Framework
To avoid repetitive integration code, they built a shared framework handling metrics, tracing, logging, registration, routing, circuit breaking, and rate limiting.
Service Mesh Alternative
Instead of embedding code, a sidecar‑based service mesh (data plane + control plane) provides non‑intrusive traffic management, though it adds some performance overhead.
Conclusion
Microservices are not the final destination; future directions include serverless, FaaS, or even revisiting monoliths, but the transformation described offers a solid foundation for scalable, maintainable systems.
Source: https://www.cnblogs.com/skabyy/p/11396571.html