Microservice Governance: System Classification, Evolution, and Dependency Analysis
This article explains how to classify microservice systems, trace their architectural evolution, and conduct thorough dependency analysis to identify strong and weak couplings, enabling reliable governance and disaster‑recovery planning for high‑traffic events such as 618 and Double‑11.
Abstract: Microservice governance faces challenges such as network latency, distributed transactions, and asynchronous messaging. The key is to identify communication dependencies and determine whether they are strong or weak.
Microservices split a monolithic application into independent processes that communicate via RPC or HTTP. Each service has its own data store, business logic, and deployment pipeline, which introduces new operational and governance concerns compared with code‑level dependencies.
1. System Classification and Evolution
1.1 System Classification
Based on functionality, systems can be divided into three categories:
Interface service systems: expose external APIs (e.g., JSF, HTTP, Hessian). Write APIs must be idempotent and protected against abuse.
Web page systems: render user‑facing pages; data may originate from multiple sources and may need merging.
Task systems: perform jobs such as statistics or data synchronization; they require distributed scheduling, resource allocation, and accurate computation.
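The idempotency requirement for write APIs above can be sketched as follows. This is a minimal illustration, not a production pattern: the class name, method names, and the in‑memory deduplication map are all hypothetical (a real system would keep the seen‑request table in Redis or a database with a TTL).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of an idempotency guard for a write API: a replayed
// request with the same requestId returns the cached result instead of
// executing the write twice.
public class IdempotentWriteService {
    private final Map<String, String> processed = new ConcurrentHashMap<>();

    // requestId is supplied by the caller (e.g. an order number or a
    // client-generated UUID); duplicates short-circuit to the first result.
    public String createOrder(String requestId, String payload) {
        return processed.computeIfAbsent(requestId, id -> doCreate(payload));
    }

    private String doCreate(String payload) {
        // Real business logic would write to the database here.
        return "order-for-" + payload;
    }

    public static void main(String[] args) {
        IdempotentWriteService svc = new IdempotentWriteService();
        String first  = svc.createOrder("req-1", "sku-42");
        String replay = svc.createOrder("req-1", "sku-42"); // duplicate call
        System.out.println(first.equals(replay)); // same result, write ran once
    }
}
```

`computeIfAbsent` gives atomic check‑and‑execute semantics per key, which is the essential property an idempotency guard needs under concurrent replays.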
1.2 System Evolution
The architecture has shifted from early monoliths to modern microservice‑based systems, changing the nature of governance from code dependencies to communication dependencies.
2. Clarifying the Purpose of the Review
Before major sales events (e.g., 618, Double‑11), a comprehensive review of all systems is performed to locate weak points. The goal is to focus on the most critical (golden) functions and processes, applying the 80/20 principle while still handling less critical parts through throttling or degradation.
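The throttling half of "throttling or degradation" can be sketched with a simple per‑window counter. This is an assumption‑laden toy (class and method names are invented; window rollover is reduced to an explicit `resetWindow` call that a real system would drive from a timer):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of counter-based throttling for non-critical traffic: requests
// beyond the per-window permit count are rejected, and the caller then
// degrades that request to a cached or default response.
public class SimpleThrottle {
    private final int permitsPerWindow;
    private final AtomicInteger used = new AtomicInteger();

    public SimpleThrottle(int permitsPerWindow) {
        this.permitsPerWindow = permitsPerWindow;
    }

    public boolean tryAcquire() {
        return used.incrementAndGet() <= permitsPerWindow;
    }

    // In a real limiter a scheduled task would call this every window.
    public void resetWindow() {
        used.set(0);
    }

    public static void main(String[] args) {
        SimpleThrottle t = new SimpleThrottle(2);
        System.out.println(t.tryAcquire()); // true
        System.out.println(t.tryAcquire()); // true
        System.out.println(t.tryAcquire()); // false -> degrade this request
    }
}
```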
3. How to Conduct the Review
Identify all system functions, distinguish core (golden) functions, and then break each core function into its workflow nodes. For each node, classify dependencies as strong (must be available, requiring disaster‑recovery plans) or weak (can be degraded).
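The strong/weak distinction translates directly into how a dependency is invoked. A minimal sketch, with hypothetical names: a weak dependency is wrapped so any failure degrades to a default, while a strong dependency would be called directly and allowed to fail the whole request.

```java
import java.util.function.Supplier;

// Sketch of calling a weak dependency through a degradation wrapper:
// if the dependency throws, the main flow continues with a fallback
// value instead of failing the request.
public class DependencyCalls {
    static <T> T callWeak(Supplier<T> dependency, T fallback) {
        try {
            return dependency.get();
        } catch (RuntimeException e) {
            return fallback; // degraded path keeps the golden flow alive
        }
    }

    public static void main(String[] args) {
        // e.g. a recommendation service that times out during peak traffic
        String recommendations = callWeak(() -> {
            throw new RuntimeException("recommendation service timeout");
        }, "[]");
        System.out.println(recommendations); // prints "[]"; the page still renders
    }
}
```

Strong dependencies get no such wrapper; for them the investment goes into redundancy and disaster‑recovery plans instead, as the following subsections describe.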
3.1 Interface Service Systems
List all provided APIs, identify golden APIs, and ensure they are non‑degradable by provisioning redundant resources (e.g., multi‑datacenter Redis clusters).
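The multi‑datacenter redundancy idea can be sketched as an ordered failover read. The `Function` fetchers below stand in for real Redis clients in different datacenters; all names are illustrative, not an actual client API.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Sketch of serving a golden API from redundant cache clusters: try each
// datacenter's cache in order and fall through to the next on failure,
// so no single datacenter outage degrades the golden API.
public class RedundantCacheRead {
    static Optional<String> get(String key, List<Function<String, String>> fetchers) {
        for (Function<String, String> fetch : fetchers) {
            try {
                String value = fetch.apply(key);
                if (value != null) {
                    return Optional.of(value);
                }
            } catch (RuntimeException e) {
                // this datacenter is unreachable; try the next one
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Function<String, String> dcA = k -> { throw new RuntimeException("DC-A down"); };
        Function<String, String> dcB = k -> "price:99";
        System.out.println(get("sku-1", List.of(dcA, dcB)).orElse("miss")); // price:99
    }
}
```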
3.2 Web Page Systems
Critical pages such as home, category, and navigation must always display content; they require multi‑level caching and fallback data to avoid a blank page.
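The multi‑level caching with fallback described above can be sketched as a tiered lookup: local cache first, then a remote cache, and finally baked‑in default data. The class, the in‑memory stand‑in for Redis, and the fallback payload are all hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of multi-level page data lookup: local cache -> remote cache ->
// static fallback, so a critical page like home or category never
// renders blank even if every cache tier misses.
public class PageDataLoader {
    private final Map<String, String> localCache = new HashMap<>();
    private final Map<String, String> remoteCache; // stands in for Redis
    private static final String FALLBACK = "{\"banner\":\"default\"}";

    PageDataLoader(Map<String, String> remoteCache) {
        this.remoteCache = remoteCache;
    }

    String load(String page) {
        String v = localCache.get(page);
        if (v != null) return v;                       // level 1: local hit
        v = remoteCache.get(page);
        if (v != null) {
            localCache.put(page, v);                   // level 2: remote hit, warm local
            return v;
        }
        return FALLBACK;                               // level 3: never a blank page
    }

    public static void main(String[] args) {
        Map<String, String> remote = new HashMap<>();
        remote.put("home", "{\"banner\":\"sale\"}");
        PageDataLoader loader = new PageDataLoader(remote);
        System.out.println(loader.load("home"));     // remote hit, now cached locally
        System.out.println(loader.load("category")); // miss everywhere -> fallback
    }
}
```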
3.3 Task Systems
Task systems need distributed workers; solutions include Zookeeper‑based scheduling or open‑source frameworks like Elastic‑Job.
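The sharding idea behind frameworks like Elastic‑Job can be sketched as follows: each worker is assigned a shard index (in practice coordinated through Zookeeper, which is assumed here) and processes only the items that hash to its shard. The method names are illustrative, not Elastic‑Job's API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of task sharding: with shardTotal workers, worker i processes
// only the item ids where id % shardTotal == i, so the full data set is
// partitioned across workers with no overlap and no gaps.
public class ShardedTask {
    static List<Long> itemsForShard(List<Long> allIds, int shardIndex, int shardTotal) {
        List<Long> mine = new ArrayList<>();
        for (long id : allIds) {
            if (id % shardTotal == shardIndex) {
                mine.add(id);
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        List<Long> ids = List.of(1L, 2L, 3L, 4L, 5L, 6L);
        System.out.println(itemsForShard(ids, 0, 2)); // [2, 4, 6]
        System.out.println(itemsForShard(ids, 1, 2)); // [1, 3, 5]
    }
}
```

If a worker dies, the coordinator reassigns its shard index to a surviving worker, which is what makes this layout resilient for statistics and synchronization jobs.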
3.4 Core‑Function Process Analysis
After identifying golden functions, map their end‑to‑end processes and highlight key nodes. Use color coding (e.g., deep yellow for strong dependencies, light green for weak dependencies) to indicate which resources require disaster‑recovery plans.
4. Conclusion
By classifying systems, tracing their evolution, and meticulously analyzing core functions and processes, organizations can pinpoint golden functionalities and establish robust disaster‑recovery strategies for strong dependencies, ensuring reliable service governance during high‑traffic events.
Architecture Digest
Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.