From Monolith to Microservices: A Practical Evolution Blueprint
This article walks through the step‑by‑step transformation of a simple online supermarket from a single‑node monolith to a fully fledged microservice architecture, highlighting the motivations, common pitfalls, component choices, monitoring, tracing, logging, resilience patterns, testing strategies, and the trade‑offs of frameworks versus service mesh.
Why Move From Monolith to Microservices?
As a website grows, a single‑unit application often cannot meet performance, scalability, and organizational needs, prompting a shift toward a distributed, microservice architecture.
Initial Monolithic Setup
Two founders, XiaoMing (developer) and XiaoPi (business), launch an online supermarket with a simple feature list: a public website (user registration, login, product browsing, ordering) and an admin backend (user, product, order management). The whole system is deployed on a single cloud instance and works well at first.
Growth Triggers New Requirements
Promotional campaigns (discounts, coupons, etc.)
Mobile channels (apps, mini‑programs)
Data‑driven personalization
To meet these, the team quickly adds new modules to the admin backend and creates a separate mobile app without proper planning, resulting in duplicated code, tangled API calls, oversized services, shared database bottlenecks, and deployment headaches.
First Refactoring: Extracting Core Services
The team abstracts common business capabilities into independent services: User, Product, Promotion, Order, and Data‑Analysis. Each service now provides a thin API layer for the front‑ends, eliminating most duplicated logic.
At this stage the database is still shared, so some monolithic drawbacks remain: performance bottlenecks, schema coupling, and risk of cascading failures.
Second Refactoring: Database Splitting & Message Queue
Each service receives its own isolated persistence layer. The Data‑Analysis service may use a data warehouse, while the Product and Promotion services add caching for high‑frequency reads. A message queue is introduced so that events such as order creation can be published asynchronously, decoupling producers (e.g., the Order service) from consumers (e.g., Promotion and Data‑Analysis).
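The caching mentioned for the Product service typically follows the cache‑aside pattern: read from the cache first, fall back to the database on a miss, and populate the cache with a TTL. A minimal sketch (the `ProductCache` class and `db_lookup` callback are illustrative names, not from the original system):

```python
import time

class ProductCache:
    """Cache-aside reads: check the cache first, fall back to the
    database on a miss, then populate the cache with a TTL."""

    def __init__(self, db_lookup, ttl_seconds=60):
        self._db_lookup = db_lookup   # function: product_id -> product dict
        self._ttl = ttl_seconds
        self._store = {}              # product_id -> (expires_at, value)

    def get(self, product_id):
        entry = self._store.get(product_id)
        if entry and entry[0] > time.monotonic():
            return entry[1]           # cache hit: no database round trip
        value = self._db_lookup(product_id)   # cache miss: hit the database
        self._store[product_id] = (time.monotonic() + self._ttl, value)
        return value
```

In production this role is usually played by Redis or Memcached rather than an in‑process dict, but the read path is the same.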
Operational Concerns
Monitoring
To detect early failure signs, each component exposes a uniform /metrics endpoint. Prometheus scrapes these endpoints, and Grafana visualises the data (CPU, memory, request latency, error rates, etc.). Open‑source exporters such as redis_exporter and mysqld_exporter cover the infrastructure components.
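What a /metrics endpoint actually returns is plain text in the Prometheus exposition format: one metric per line, with optional labels in braces. A simplified renderer (real services would use the official client library, e.g. prometheus_client, rather than hand‑rolling this):

```python
def render_metrics(counters):
    """Render {(metric_name, label_pairs): value} in the Prometheus
    text exposition format served by a /metrics endpoint."""
    lines = []
    for (name, labels), value in sorted(counters.items()):
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels))
        if label_str:
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Example: one counter with a 'path' label.
body = render_metrics({("http_requests_total", (("path", "/orders"),)): 42})
```

Prometheus scrapes this text on a schedule; Grafana then queries Prometheus, so the services themselves never talk to Grafana directly.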
Tracing
Distributed tracing records traceId, spanId, parentId, and timestamps, propagated between services in HTTP headers. The team adopts Zipkin (an open‑source tracing system modeled on Google's Dapper paper) and injects a lightweight interceptor into each service to forward spans.
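Zipkin's propagation convention is the B3 header family: X-B3-TraceId stays constant across the whole request chain, X-B3-SpanId identifies the current hop, and X-B3-ParentSpanId points at the caller's span. The interceptor's core job is to derive outgoing headers from incoming ones, roughly:

```python
import secrets

def child_span_headers(incoming):
    """Build outgoing B3 headers for a downstream call: keep the trace id,
    record the caller's span as parent, mint a fresh span id."""
    trace_id = incoming.get("X-B3-TraceId") or secrets.token_hex(16)
    return {
        "X-B3-TraceId": trace_id,                              # unchanged across hops
        "X-B3-ParentSpanId": incoming.get("X-B3-SpanId", ""),  # caller's span
        "X-B3-SpanId": secrets.token_hex(8),                   # new span for this hop
    }
```

A real interceptor would also record timestamps and report the finished span to the Zipkin collector; this sketch covers only the header arithmetic.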
Log Analysis
When log volume grows, the ELK stack (Elasticsearch, Logstash, Kibana) is deployed. Services write logs to files; Logstash agents ship them to Elasticsearch, and Kibana provides searchable dashboards.
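The ELK pipeline is much easier to operate when services emit one JSON object per log line, because Logstash can forward them to Elasticsearch without custom grok parsing. A minimal formatter (field names here are a common convention, not prescribed by the original article):

```python
import json
import datetime

def log_line(service, level, message, **fields):
    """Emit one JSON object per line; Logstash ships these to
    Elasticsearch, and Kibana can filter on any field."""
    record = {
        "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "message": message,
        **fields,   # arbitrary structured context, e.g. order_id
    }
    return json.dumps(record)
```

With this shape, a Kibana query like `service: order AND level: ERROR` needs no parsing rules at all.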
Gateway & Service Governance
A central API gateway enforces authentication, rate‑limiting, and routing. The team chooses a coarse‑grained approach: one gateway per service cluster, keeping intra‑cluster calls direct.
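The gateway's two core duties named above, authentication and routing, can be sketched in a few lines (the routing table, token check, and class name are illustrative; a production gateway would be Nginx/Kong/Zuul or similar, not hand-written):

```python
class Gateway:
    """Toy API gateway: reject requests without a valid token,
    then route by path prefix to an upstream service cluster."""

    def __init__(self, routes, valid_tokens):
        self._routes = routes          # path prefix -> upstream cluster name
        self._tokens = valid_tokens    # stand-in for real auth

    def handle(self, path, token):
        if token not in self._tokens:
            return (401, None)         # authentication enforced at the edge
        for prefix, upstream in self._routes.items():
            if path.startswith(prefix):
                return (200, upstream)
        return (404, None)
```

The coarse‑grained choice in the text maps to this model directly: each cluster sits behind one such gateway, and calls inside a cluster bypass it.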
Service Discovery & Dynamic Scaling
Instances register themselves with a discovery service (e.g., Consul, Eureka, etcd). Clients pull the address list and perform client‑side load balancing, allowing seamless scaling up or down.
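Client‑side load balancing over a discovery‑sourced address list is, at its simplest, round robin with a refresh hook for scaling events. A sketch (the class is illustrative; Ribbon, gRPC's built‑in balancers, or the Consul/Eureka client libraries do this in practice):

```python
import itertools

class RoundRobinBalancer:
    """Rotate across instances pulled from a discovery service
    (Consul, Eureka, etcd in the text above)."""

    def __init__(self, instances):
        self._set(instances)

    def _set(self, instances):
        self._instances = list(instances)
        self._cycle = itertools.cycle(self._instances)

    def refresh(self, instances):
        # Called when the discovery service reports instances
        # joining or leaving (scale up / scale down).
        self._set(instances)

    def pick(self):
        return next(self._cycle)
```

Because the client holds the address list, adding a third instance only requires the registry to propagate the change; no load balancer appliance needs reconfiguring.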
Resilience Patterns
Circuit Breaker: After repeated failures, stop calling the faulty service and return an error immediately.
Service Degradation: Disable non‑critical features (e.g., recommendation) when their downstream services are unavailable.
Rate Limiting: Reject excess requests per time window, optionally per caller, to protect overloaded services.
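The first two patterns combine naturally: when the breaker opens, a fallback supplies the degraded response. A minimal sketch, assuming consecutive‑failure counting and a fixed cool‑down (libraries like Hystrix, resilience4j, or Sentinel implement far richer versions):

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive failures the circuit opens:
    for `reset_after` seconds, calls fail fast or return the fallback
    (service degradation); then one trial call is allowed through."""

    def __init__(self, max_failures=3, reset_after=30.0, fallback=None):
        self._max = max_failures
        self._reset_after = reset_after
        self._fallback = fallback      # e.g. empty recommendation list
        self._failures = 0
        self._opened_at = None

    def call(self, fn, *args, **kwargs):
        if self._opened_at is not None:
            if time.monotonic() - self._opened_at < self._reset_after:
                if self._fallback is not None:
                    return self._fallback(*args, **kwargs)
                raise RuntimeError("circuit open")
            self._opened_at = None     # half-open: allow a trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self._failures += 1
            if self._failures >= self._max:
                self._opened_at = time.monotonic()
                self._failures = 0
            raise
        self._failures = 0
        return result
```

The key property is that an open circuit answers immediately, so a slow downstream cannot tie up the caller's threads and cascade the failure upstream.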
Testing Strategy
End‑to‑end tests for core user journeys.
Service‑level tests using mock servers for dependent services.
Unit tests covering individual code units.
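A service‑level test with mocked dependencies looks roughly like this (the `OrderService` and its `user_client` are hypothetical names standing in for a real remote client; dedicated mock servers such as WireMock serve the same purpose over HTTP):

```python
from unittest import mock

class OrderService:
    """Hypothetical service under test: validates the user via a
    remote user-service client before creating the order."""

    def __init__(self, user_client):
        self._users = user_client

    def place_order(self, user_id, product_id):
        if not self._users.exists(user_id):   # remote call in production
            raise ValueError("unknown user")
        return {"user": user_id, "product": product_id, "status": "created"}

# In the test, the real user-service client is replaced by a mock,
# so the Order service's logic is exercised without a live dependency.
user_client = mock.Mock()
user_client.exists.return_value = True
order = OrderService(user_client).place_order(42, 7)
```

This keeps service‑level tests fast and deterministic, and reserves the slower end‑to‑end suite for the core user journeys named above.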
Microservice Framework vs. Service Mesh
The team builds a lightweight in‑house framework that injects metrics, tracing, and health‑check code into each service, but framework upgrades become costly. As an alternative, a Service Mesh (e.g., Istio) adds a sidecar proxy to each pod, handling traffic, security, and observability without code changes, at the expense of additional latency.
Conclusion
Microservices are not a final destination; they introduce new operational complexities that require robust monitoring, tracing, logging, discovery, and resilience mechanisms. Ongoing evolution may lead to serverless or a return to monoliths, but the principles of clear service boundaries and automated governance remain essential.
Liangxu Linux
Liangxu, a self‑taught IT professional now working as a Linux development engineer at a Fortune 500 multinational, shares extensive Linux knowledge: fundamentals, applications, tools, plus Git, databases, Raspberry Pi, and more.
