From Monolith to Microservices: A Real‑World Journey and Lessons Learned
This article walks through the evolution of a simple online supermarket from a monolithic website to a fully decomposed microservice architecture. It highlights the challenges encountered along the way, such as code duplication, database bottlenecks, and operational complexity, and presents practical solutions: service decomposition, monitoring, tracing, gateway control, service discovery, circuit breaking, rate limiting, testing strategies, and service meshes.
As a website grows, a monolithic application can no longer meet performance and scalability requirements, prompting a transition to a distributed microservice architecture.
Initial Requirements
A few years ago, Xiao Ming and Xiao Pi built a simple online supermarket with a public website for browsing and purchasing products and a separate admin backend for managing users, products, and orders.
The initial implementation was straightforward, but rapid business growth introduced new demands such as promotional activities, mobile apps, and data analysis.
Growing Pains
Duplicated business logic across the website and mobile apps.
Inconsistent data access—sometimes via shared databases, sometimes via API calls, leading to tangled dependencies.
Services expanding beyond their original responsibilities, blurring boundaries.
Performance bottlenecks in the admin backend after adding analytics and promotion features.
Shared database schema preventing refactoring and causing contention.
Overall difficulty in development, testing, deployment, and maintenance, with frequent late-night release windows.
Team friction over ownership of common functionalities.
These issues forced rapid, ad‑hoc development and resulted in a fragile system.
Time for Change
Recognizing the problems, Xiao Ming and Xiao Pi abstracted common business capabilities into independent services: User, Product, Promotion, Order, and Data‑Analysis services.
Each backend now consumes these services, eliminating redundant code and leaving only thin controllers and front‑ends.
Partial Service Split
Even after separating services, a shared database remained, preserving many monolithic drawbacks such as performance bottlenecks and schema coupling.
Full Service and Data Isolation
The team further split the databases, giving each service its own persistence layer and introducing a message queue for asynchronous, event‑driven communication between services.
This allowed heterogeneous technologies—for example, a data‑warehouse for analytics and caching for high‑traffic services.
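The decoupling that a message queue buys can be illustrated with a minimal in-process stand-in (a real deployment would use a broker such as RabbitMQ or Kafka; the event shape and function names below are illustrative, not from the original system):

```python
import queue

# In-process stand-in for a message queue. The order service publishes an
# event; the analytics service consumes it at its own pace, so neither
# blocks or even knows about the other.
event_bus = queue.Queue()

def publish_order_created(order_id, amount):
    # Producer side: fire-and-forget, no direct call into analytics.
    event_bus.put({"type": "order.created", "order_id": order_id, "amount": amount})

def consume_one():
    # Consumer side: analytics pulls the next event off the queue.
    event = event_bus.get()
    return f"analytics recorded {event['type']} for order {event['order_id']}"

publish_order_created("A1001", 59.90)
print(consume_one())  # analytics recorded order.created for order A1001
```

Because the producer never waits on the consumer, the order service stays fast even when analytics is slow or temporarily down.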
Monitoring – Detecting Fault Signs
To catch issues early, a monitoring stack was built: each component exposes a metrics endpoint, Prometheus scrapes these metrics, and Grafana visualizes them and sends alerts.
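What a metrics endpoint actually serves is plain text in the Prometheus exposition format. Real services normally use an official client library (e.g. prometheus_client); this stdlib-only sketch, with made-up counter values, just shows the payload Prometheus scrapes:

```python
# Render a counter in the Prometheus text exposition format.
# The endpoint names and counts are illustrative.
request_total = {"GET /products": 42, "POST /orders": 7}

def render_metrics():
    lines = [
        "# HELP http_requests_total Total HTTP requests handled.",
        "# TYPE http_requests_total counter",
    ]
    for endpoint, count in request_total.items():
        method, path = endpoint.split(" ", 1)
        lines.append(f'http_requests_total{{method="{method}",path="{path}"}} {count}')
    return "\n".join(lines) + "\n"

print(render_metrics())
```

Grafana then plots these series and fires alerts when a rate or threshold rule is breached.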
Tracing – Locating Problems
Distributed tracing tags each request with a traceId, spanId, and parentId that are propagated across services in HTTP headers, while each span records its own start and finish times. The team adopted Zipkin (an open‑source implementation of Google's Dapper design) and added an interceptor to inject the tracing headers and forward span data to the collector.
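The header-injection step the interceptor performs can be sketched as follows. `inject_trace_headers` is a hypothetical helper, not Zipkin's API, but the `X-B3-*` header names are the ones Zipkin's B3 propagation format actually uses:

```python
import secrets

def inject_trace_headers(headers, parent_span=None):
    """Attach B3 tracing headers to an outgoing request."""
    headers = dict(headers)
    if parent_span is None:
        # Entry point of the call chain: start a brand-new trace.
        headers["X-B3-TraceId"] = secrets.token_hex(8)
    else:
        # Downstream call: keep the traceId, record who called us.
        headers["X-B3-TraceId"] = parent_span["X-B3-TraceId"]
        headers["X-B3-ParentSpanId"] = parent_span["X-B3-SpanId"]
    headers["X-B3-SpanId"] = secrets.token_hex(8)  # fresh span per hop
    return headers

root = inject_trace_headers({})
child = inject_trace_headers({}, parent_span=root)
assert child["X-B3-TraceId"] == root["X-B3-TraceId"]
assert child["X-B3-ParentSpanId"] == root["X-B3-SpanId"]
```

Because every hop shares the traceId, Zipkin can reassemble the full call tree of one request and show where time was spent.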
Log Analysis
Log volume grew beyond manual inspection, so the ELK stack (Elasticsearch, Logstash, Kibana) was deployed. Services write logs to files; lightweight agents ship them to Logstash, which indexes them in Elasticsearch for fast searching via Kibana.
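Indexing works best when each log line is a single JSON object, so Logstash can pick out fields without fragile regex parsing. A minimal sketch of such a formatter (field names and the service name are illustrative):

```python
import datetime
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line for easy Logstash/Elasticsearch indexing."""
    def format(self, record):
        return json.dumps({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "level": record.levelname,
            "service": "order-service",  # illustrative service name
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("order")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("order A1001 created")  # one searchable JSON line on stderr
```

In Kibana, "find every ERROR from order-service in the last hour" then becomes a field query instead of a grep across machines.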
Gateway – Access Control and Service Governance
A gateway sits between callers and services, handling authentication, authorization, and routing. The team chose a coarse‑grained approach: one gateway for all external traffic, internal calls remain direct.
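The gateway's two jobs, authenticate then route by path prefix, can be sketched in a few lines. The route table, upstream addresses, and token check here are all illustrative stand-ins:

```python
# Path-prefix routing table: external path -> internal service address.
ROUTES = {
    "/users": "http://user-service:8080",
    "/products": "http://product-service:8080",
    "/orders": "http://order-service:8080",
}
VALID_TOKENS = {"secret-token"}  # stand-in for a real auth/session check

def route(path, token):
    """Return (status, upstream URL) for an incoming external request."""
    if token not in VALID_TOKENS:
        return (401, None)  # reject before any internal service is touched
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return (200, upstream + path)
    return (404, None)

assert route("/orders/42", "secret-token") == (200, "http://order-service:8080/orders/42")
assert route("/orders/42", "bad-token") == (401, None)
```

Keeping auth at this single choke point means internal services can trust their callers and skip per-service credential checks.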
Service Registration & Discovery – Dynamic Scaling
Instances register themselves with a discovery service (e.g., Consul, Eureka). Clients fetch the current list of service endpoints, enabling automatic scaling and health‑checking without manual load‑balancer updates.
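The register/heartbeat/lookup cycle can be sketched with an in-memory registry (a toy stand-in for Consul or Eureka; the TTL and addresses are illustrative):

```python
import random
import time

TTL = 30  # seconds without a heartbeat before an instance is considered dead

registry = {}  # service name -> {address: last_heartbeat_timestamp}

def register(service, address):
    """Register an instance; calling again simply refreshes its heartbeat."""
    registry.setdefault(service, {})[address] = time.time()

heartbeat = register  # a heartbeat is just a timestamp refresh

def discover(service):
    """Pick a random live instance, ignoring ones whose heartbeat expired."""
    now = time.time()
    alive = [addr for addr, ts in registry.get(service, {}).items() if now - ts < TTL]
    return random.choice(alive) if alive else None

register("product-service", "10.0.0.1:8080")
register("product-service", "10.0.0.2:8080")
print(discover("product-service"))  # one of the two live addresses
```

When an instance stops heartbeating, it silently drops out of `discover` results, which is how scaling down or crashing needs no manual load-balancer edits.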
Circuit Breaking, Service Degradation, and Rate Limiting
Circuit breakers stop cascading failures by short‑circuiting unresponsive services. Non‑critical services can be degraded to preserve core functionality. Rate limiting protects downstream services from overload, optionally scoped per caller.
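The circuit-breaker part of this can be sketched as a small wrapper around outbound calls. This is a generic failure-count breaker, not any particular library's API, and the threshold/reset values are illustrative:

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures the circuit opens and calls
    fail fast; after `reset_after` seconds one trial call is allowed
    (the half-open state)."""

    def __init__(self, threshold=3, reset_after=10.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                # Open: don't touch the sick service, fail immediately.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Failing fast is what makes degradation possible: the caller gets an instant error it can turn into a fallback (cached data, a "feature unavailable" notice) instead of hanging on a dead dependency.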
Testing Strategy
End‑to‑end tests covering critical user flows.
Service‑level tests with mocked dependencies.
Unit tests for individual code units.
Because end‑to‑end tests are costly to write and run, they cover only the core user flows; when one fails, the defect is reproduced with a cheaper service‑level or unit test so that future regressions are caught at the lowest possible layer.
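The middle layer of this pyramid, a service-level test with mocked dependencies, can be sketched like this. The `create_order` function and the product-client interface are hypothetical examples, not code from the system described:

```python
from unittest.mock import Mock

def create_order(product_client, product_id, qty):
    """Order-service logic under test: price an order via the product service."""
    product = product_client.get_product(product_id)
    return {"product_id": product_id, "total": product["price"] * qty}

# The order logic runs for real, but the product service is replaced by a
# mock, so the test is fast and needs no running dependency.
product_client = Mock()
product_client.get_product.return_value = {"price": 9.5}

order = create_order(product_client, "p-1", 3)
assert order["total"] == 28.5
product_client.get_product.assert_called_once_with("p-1")
```

Because the mock also records how it was called, the test checks both the computed result and the contract with the dependency.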
Microservice Framework
To avoid repetitive integration code, the team built a lightweight framework that provides metric endpoints, tracing hooks, log forwarding, service registration, and routing. However, framework upgrades require coordinated updates across all services.
Service Mesh Alternative
Instead of embedding code, a sidecar proxy (e.g., Envoy) can handle networking, telemetry, and security. The data plane (proxies) works with a control plane that distributes configuration. Service meshes are non‑intrusive but add latency and operational complexity.
Conclusion
Microservices are not the final destination; future directions include serverless, FaaS, and even a resurgence of monoliths. Nonetheless, the migration described provides a practical roadmap for evolving a simple application into a resilient, observable microservice system.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and will accompany you throughout your operations career, growing together.