How to Evolve a Monolith into an Event‑Driven Microservices Architecture
This guide walks you through evolving a monolithic e‑commerce application into a highly available, scalable, low‑latency, event‑driven microservices system capable of handling millions of requests.
It introduces the design patterns, principles, best practices, and technology choices behind that evolution, showing step by step how to move from a monolith to an event‑driven microservices architecture that stays resilient to network failures.
Architecture Design Journey
We will start from basic software architecture design and create a simple e‑commerce application using a monolithic architecture that can handle a small number of requests.
Article Content Organization
This article contains both theoretical knowledge and practical information:
We will learn each specific pattern, why it should be used, and where it should be applied.
Then we will look at reference architectures that apply these patterns.
Next we will combine the newly learned patterns to design our architecture.
Finally we will decide which technologies to use to implement the architecture.
Therefore, we will iteratively design the architecture, gradually evolving from a monolith to an event‑driven microservices architecture.
Architecture Evolution
We will evolve the architecture based on the following questions:
How do we scale the application?
How many requests does the application need to handle?
What latency can the architecture tolerate?
Thus we improve the architecture from several aspects:
Scalability and reliability measure the level of service an application can provide to end users. If our e‑commerce application can serve millions of users without noticeable downtime, we can say the system is highly scalable and reliable. Scalability and availability are primary factors to consider in a well‑designed architecture.
Scalability = the e‑commerce application should be able to serve millions of users.
Availability = the e‑commerce application should be 24/7 available.
Maintainability = the e‑commerce application should be able to evolve for many years.
Efficiency = the e‑commerce application's response latency should be within an acceptable range, e.g., less than 2 seconds (low latency).
Requests per Second and Acceptable Latency
Now let’s look at acceptable latency. As the number of users grows, how can we keep latency within an acceptable range? See the table below:
From the table we can see that our e‑commerce application starts as a small app with 2K concurrent users and 500 requests per second. We will design the architecture based on the expected scale.
Later, as the business grows, it will need more resources to handle larger request volumes, and you will see how we evolve the architecture according to these numbers.
Monolithic Architecture
Decades of software development have produced many methods and patterns, each with its own advantages and challenges.
Therefore, we will start by understanding existing methods to design the e‑commerce architecture and gradually migrate to the cloud.
To understand cloud‑native microservices, we need to understand what a monolithic application is and how to migrate from a monolith to a microservices architecture.
Most legacy applications are primarily implemented as monoliths.
If all functionality resides in a single codebase, the application is a monolith. In a monolithic pattern, the UI, business logic, and data access all live in the same repository.
All concerns are packaged into a single deployment. Even a monolith can be layered into presentation, business, and data layers, then deployed as a single JAR/WAR file.
Monolithic approaches have real advantages alongside their drawbacks. Here are the main pros and cons.
Pros include:
Easy code checkout, since everything lives in one codebase.
Simple debugging across modules.
Straightforward vertical scaling.
Cons include:
The codebase grows large over time and becomes hard to manage.
Parallel development on the same codebase is difficult.
Adding new features to a large legacy monolith is challenging.
Any change requires redeploying the entire application.
That covers the essentials of monolithic architecture.
When to Use a Monolith
Although monoliths have many drawbacks, they are still a good choice for small applications because they are simple to build, test, and deploy, and vertical scaling is easy and fast.
Compared to microservices, which require experienced developers to identify and develop services, monoliths are simpler to develop and deploy.
Monolithic Architecture Design
In this section we will design our e‑commerce application step‑by‑step using a monolithic architecture, iterating based on requirements.
Functional Requirements
List products
Filter products by brand and category
Add products to cart
Apply coupon discounts and view total price
View cart and create order
List historical orders and order items
Non‑Functional Requirements
Scalability
Handle increased concurrent users
Additionally, it helps to record these constraints directly on the architecture diagram so they are not forgotten during design.
Principles
KISS (Keep It Simple, Stupid)
YAGNI (You Aren't Gonna Need It)
We will consider these rules when designing the architecture.
As you can see, we have designed the e‑commerce application using a monolithic architecture.
We added a large E‑Commerce box that contains the store UI, catalog service, shopping cart (SC) service, discount service, and order service, all packaged as a single artifact running in one container.
The monolith has a huge codebase containing all modules. Adding a new module requires modifying the existing code and redeploying the artifact to a Tomcat server. For simplicity we follow the KISS principle.
We will refactor the design based on requirements and iterate.
Monolith Scalability
The diagram shows we added two application servers, performed horizontal scaling, and placed a load balancer between the client and the e‑commerce application.
In a monolith, to achieve scaling we add more e‑commerce servers and put a load balancer in front of them.
The load balancer receives requests and uses consistent hashing to distribute them evenly across servers.
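Consistent hashing can be sketched in a few lines. The following is a minimal, illustrative ring (server names and virtual-node count are made up for the example), showing the two properties that matter for a load balancer: the same key always routes to the same server, and removing a server only remaps that server's keys.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: requests with the same key hash to the
// same server, and removing a server only remaps that server's keys.
public class ConsistentHashRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();
    private static final int VNODES = 100; // virtual nodes smooth the distribution

    public void addServer(String server) {
        for (int i = 0; i < VNODES; i++) {
            ring.put(hash(server + "#" + i), server);
        }
    }

    public void removeServer(String server) {
        for (int i = 0; i < VNODES; i++) {
            ring.remove(hash(server + "#" + i));
        }
    }

    // Walk clockwise from the key's hash to the first server on the ring.
    public String route(String requestKey) {
        SortedMap<Integer, String> tail = ring.tailMap(hash(requestKey));
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    private int hash(String s) {
        // FNV-1a: a cheap, well-distributed non-cryptographic hash
        int h = 0x811c9dc5;
        for (char c : s.toCharArray()) {
            h = (h ^ c) * 0x01000193;
        }
        return h;
    }

    public static void main(String[] args) {
        ConsistentHashRing lb = new ConsistentHashRing();
        lb.addServer("ecommerce-1");
        lb.addServer("ecommerce-2");
        String before = lb.route("session-42");
        // The same session key always lands on the same server.
        System.out.println("session-42 -> " + before);
        System.out.println("stable: " + before.equals(lb.route("session-42")));
    }
}
```

Production load balancers such as NGINX implement this (and simpler policies like round‑robin) for you; the sketch is only meant to show why consistent hashing keeps session affinity stable as servers come and go.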
Technology Stack Adaptation
Now let's choose concrete technologies to implement this architecture.
The diagram shows likely options for the e‑commerce monolith: NGINX as the load balancer and a standard Java implementation backed by an Oracle database.
Microservice Architecture
Microservices are small business services that can be independently deployed and work together.
Microservice architecture style is a way of developing a single application as a set of small services, each running in its own process and communicating via lightweight mechanisms, typically HTTP or gRPC APIs.
Thus, microservice architecture is a cloud‑native approach where applications consist of many loosely coupled, independently deployable components.
Microservices
— Have their own tech stack, including database and data model;
— Communicate via REST APIs, event streams, or message brokers;
— Organized by business capability; service boundaries are often called bounded contexts.
In upcoming sections we will see how to use bounded contexts to decouple microservices.
Microservice Characteristics
Microservices are small, independent, and loosely coupled. A small development team can build, test, and deploy a service. Each service has its own codebase managed by a small team.
Services can be deployed independently; teams can update a service without rebuilding the whole application.
Services own their data persistence, unlike traditional monoliths where a single data layer handles persistence.
Benefits of Microservice Architecture
Agility
The most important characteristic is that services are small and can be deployed independently.
Small, focused teams
A microservice should be small enough for a single team to build, test, and deploy it.
Scalability
Microservices can be scaled independently; you can scale a specific service without scaling the entire application.
Challenges of Microservice Architecture
Complexity
Many services must work together, leading to more moving parts than a monolith.
Network issues and latency
Because services are small and communicate over the network, network problems must be managed.
Data consistency
Each service persists its own data, making consistency a challenge.
Microservice Architecture Design
In this section we will design a microservice architecture step‑by‑step, iterating based on requirements.
We follow the database‑per‑service pattern. Each microservice has its own database, allowing polyglot persistence: the Product service may use a NoSQL document store, the shopping cart (SC) service a NoSQL key‑value store, and the Order service a relational database.
Architecture Evolution
Let’s look at this microservice diagram and consider what is missing, what pain points exist, and how to improve scalability, availability, and concurrency support.
The UI communicates directly with microservices, which is hard to manage. We should focus on microservice communication.
Microservice Communication
When migrating to microservices, the biggest challenge is the change in communication mechanisms. Because microservices are distributed, they communicate via network protocols such as HTTP, gRPC, or message brokers.
Therefore, services must use inter‑service communication protocols like HTTP, gRPC, or AMQP.
Since microservices have complex structures and are independently developed and deployed, careful consideration of communication types is required during design.
API‑Gateway Pattern
If you want to build a large application with multiple client apps based on microservices, it is recommended to use the API‑gateway pattern.
This pattern provides a reverse proxy that routes requests to internal microservice endpoints. The API gateway offers a single entry point for clients and handles routing, aggregation, authentication, SSL termination, and caching.
API‑Gateway Design
We will iterate the e‑commerce architecture by adding an API gateway.
The diagram shows that client requests are collected at a single entry point and routed to internal microservices.
The gateway handles client requests, provides internal routing, aggregates multiple microservice responses, and manages cross‑cutting concerns such as authentication, rate limiting, and throttling.
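The routing half of a gateway reduces to a prefix table plus cross-cutting checks applied before any service is reached. Below is a toy sketch under illustrative assumptions (the service hostnames and a single bearer-token check stand in for real service discovery and authentication):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Toy API gateway: matches the request path against a routing table and
// rejects unauthenticated requests before they reach any service.
public class ApiGateway {
    private final Map<String, String> routes = new LinkedHashMap<>();

    public ApiGateway() {
        // Checked in insertion order; hostnames here are illustrative.
        routes.put("/catalog", "http://catalog-service:8080");
        routes.put("/cart", "http://cart-service:8080");
        routes.put("/order", "http://order-service:8080");
    }

    public Optional<String> route(String path, String authToken) {
        if (authToken == null || authToken.isEmpty()) {
            return Optional.empty(); // cross-cutting concern: authentication
        }
        return routes.entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .map(e -> e.getValue() + path)
                .findFirst();
    }

    public static void main(String[] args) {
        ApiGateway gw = new ApiGateway();
        System.out.println(gw.route("/catalog/products?brand=acme", "token-1").orElse("403"));
        System.out.println(gw.route("/order/15", null).orElse("403"));
    }
}
```

A real gateway (NGINX, Spring Cloud Gateway, or a cloud-managed offering) adds aggregation, SSL termination, rate limiting, and caching on top of this same routing core.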
Backends‑for‑Frontends (BFF) Pattern
Using a single complex API gateway creates a single point of failure and can become a bottleneck. The BFF pattern solves this by providing separate API gateways for each client type (mobile, web, desktop), reducing coupling and improving resilience.
Thus, we will create multiple API gateways based on client boundaries.
Internal Microservice Communication
All client synchronous requests go through the API gateway, but internal microservices may still need to call each other. Reducing inter‑service calls is best practice, but some use cases require multiple internal calls.
For example, a checkout operation may trigger a chain of six synchronous HTTP calls, increasing latency and risking failures.
To address this, we can either make inter‑service communication asynchronous using a message broker, or use a service aggregation pattern to combine queries into a single API call.
Service Aggregation Pattern
The service aggregation pattern receives a client request, distributes it to multiple backend services, merges the results, and returns a single response, reducing communication overhead.
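A minimal aggregator can be sketched with parallel futures. The service calls below are simulated stubs (the method names and return values are invented for the example); in a real system each would be an HTTP or gRPC call to an internal microservice:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Aggregator sketch: fan out to several (simulated) backend services in
// parallel, then merge the results into one response for the client.
public class CheckoutAggregator {

    // Stand-ins for real HTTP calls to internal services.
    static String fetchCart(String userId)     { return "2 items"; }
    static String fetchDiscount(String userId) { return "10% off"; }
    static String fetchShipping(String userId) { return "2-day"; }

    public static Map<String, String> aggregate(String userId) {
        CompletableFuture<String> cart = CompletableFuture.supplyAsync(() -> fetchCart(userId));
        CompletableFuture<String> discount = CompletableFuture.supplyAsync(() -> fetchDiscount(userId));
        CompletableFuture<String> shipping = CompletableFuture.supplyAsync(() -> fetchShipping(userId));

        // One network round trip for the client instead of three.
        CompletableFuture.allOf(cart, discount, shipping).join();
        return Map.of(
                "cart", cart.join(),
                "discount", discount.join(),
                "shipping", shipping.join());
    }

    public static void main(String[] args) {
        Map<String, String> response = aggregate("user-7");
        System.out.println(response.get("cart") + " / " + response.get("discount"));
    }
}
```

Because the backend calls run concurrently, the client's latency is roughly that of the slowest call rather than the sum of all of them.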
We will iterate the architecture by adding the service aggregation and service registry patterns.
The diagram shows the application now incorporates service aggregation and service registry components.
Asynchronous Messaging in Microservices
If communication involves only a few services, synchronous calls are fine. When many services need to interact and some operations are long‑running, asynchronous messaging should be used.
Otherwise, inter‑service dependencies and coupling cause bottlenecks and serious architectural problems.
Event‑driven communication relies on events and is often implemented with a publish‑subscribe message broker.
Publish‑Subscribe Design Pattern
Publish‑subscribe is a messaging pattern where publishers send messages to a broker without knowing the subscribers, and subscribers receive only the messages they are interested in.
We will add a publish‑subscribe message broker to enable asynchronous microservice communication.
Potential brokers include Kafka and RabbitMQ.
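The decoupling the pattern buys can be seen in a few lines of in-memory code. This is not how Kafka or RabbitMQ work internally, just a sketch of the contract: publishers address a topic, never a subscriber.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// In-memory publish-subscribe broker: publishers know only the topic name,
// never the subscribers, which is what decouples the services.
public class MessageBroker {
    private final Map<String, List<Consumer<String>>> topics = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String event) {
        topics.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }

    public static void main(String[] args) {
        MessageBroker broker = new MessageBroker();
        List<String> received = new ArrayList<>();
        // Two independent services react to the same event.
        broker.subscribe("order-created", e -> received.add("shipping saw " + e));
        broker.subscribe("order-created", e -> received.add("billing saw " + e));
        broker.publish("order-created", "order-101");
        System.out.println(received);
    }
}
```

Note that adding a third subscriber requires no change to the publisher, which is exactly the loose coupling event-driven architectures rely on; real brokers add durability, ordering, and delivery guarantees on top.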
Microservice Data Management
In a monolith, querying different entities is easy because a single database handles all data, and ACID transactions simplify consistency.
In microservices, each service may have its own database (relational or NoSQL). Managing data across services requires patterns and principles.
CQRS Design Pattern
CQRS (Command Query Responsibility Segregation) separates read and write models, often using two physically separate databases. This improves performance for read‑heavy workloads and allows custom read models.
The materialized view pattern is a good way to implement the read database: it keeps pre‑computed, denormalized data ready to serve queries without joins.
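The split can be sketched with two in-memory stores standing in for the two physical databases (class and method names here are illustrative, not a prescribed API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// CQRS sketch: commands mutate the write store; a denormalized read model
// (the "materialized view") is refreshed so queries never touch the write side.
public class OrderCqrs {
    // Write side: normalized rows, one per order line.
    private final List<String[]> writeStore = new ArrayList<>();
    // Read side: pre-computed view keyed by user, ready to serve as-is.
    private final Map<String, List<String>> readView = new HashMap<>();

    // Command: goes through the write model only.
    public void placeOrder(String userId, String product) {
        writeStore.add(new String[] { userId, product });
        refreshView(userId, product); // in production this is async, via events
    }

    // Query: served entirely from the materialized view.
    public List<String> ordersOf(String userId) {
        return readView.getOrDefault(userId, List.of());
    }

    private void refreshView(String userId, String product) {
        readView.computeIfAbsent(userId, u -> new ArrayList<>()).add(product);
    }

    public static void main(String[] args) {
        OrderCqrs app = new OrderCqrs();
        app.placeOrder("alice", "keyboard");
        app.placeOrder("alice", "mouse");
        System.out.println(app.ordersOf("alice"));
    }
}
```

The synchronous refresh here is a simplification: with two physically separate databases the view is updated asynchronously, which is where the event-driven machinery of the next section comes in.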
Event Sourcing Pattern
Event sourcing stores every state‑changing event in an event store, which becomes the source of truth. The read database builds materialized views by consuming these events, often via a publish‑subscribe broker.
When a user creates or updates an order, we write to a relational database; when the user queries orders, we read from a NoSQL database that is kept in sync via publish‑subscribe.
We plan to use SQL Server as the relational write store, Cassandra as the NoSQL read store, and Kafka topics to synchronize them.
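The core mechanic of event sourcing, replaying an append-only log to rebuild state, fits in a short sketch. The event names and cart domain below are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Event-sourcing sketch: the append-only event log is the source of truth;
// current state is a fold (replay) over the events, so it can always be rebuilt.
public class CartEventStore {
    record Event(String type, String item) {}

    private final List<Event> log = new ArrayList<>();

    public void append(String type, String item) {
        log.add(new Event(type, item)); // never update or delete, only append
    }

    // Rebuild the materialized cart view by replaying every event in order.
    public List<String> replayCart() {
        List<String> cart = new ArrayList<>();
        for (Event e : log) {
            if (e.type().equals("ItemAdded")) cart.add(e.item());
            else if (e.type().equals("ItemRemoved")) cart.remove(e.item());
        }
        return cart;
    }

    public static void main(String[] args) {
        CartEventStore store = new CartEventStore();
        store.append("ItemAdded", "book");
        store.append("ItemAdded", "pen");
        store.append("ItemRemoved", "book");
        System.out.println(store.replayCart());
    }
}
```

Because the log is never mutated, the read model can be dropped and rebuilt at any time, and the same events can feed multiple materialized views, which is what Kafka topics provide at scale.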
Event‑Driven Microservice Architecture
Event‑driven microservice architecture uses events for all inter‑service communication, providing asynchronous behavior and loose coupling.
The event hub acts as a large event store capable of real‑time processing.
Event‑Driven Architecture Design
All communication goes through the event hub, which can be considered a real‑time processing database.
The architecture can meet target concurrency with low latency, providing high scalability and high availability.
Thus, we have completed the design of an e‑commerce microservice architecture that incorporates all the discussed design principles and patterns.
Source: https://www.infoq.cn/article/6dlQZisMiXK3hzLIwEET