Why Measure Software Architecture and Which Metrics to Use
The article explains the importance of measuring software architecture, outlines the granularity of metrics from code to infrastructure, and provides concrete measurement indicators for code implementation, component design, architecture design, and runtime infrastructure to guide effective architecture governance.
1. Why Measure Software Architecture
Effective metric measurement helps locate problems in software‑architecture governance and R&D efficiency, much like a doctor uses diagnostic indicators to identify a disease. Metrics work like a switch: once turned on, they expose hidden issues and make the effect of improvements assessable.
If you cannot measure it, you cannot manage it. — Peter Drucker
Metrics allow architects to diagnose architectural health, especially as system size and complexity exceed what a single person can comprehend.
2. What Should Be Measured When Evaluating Software Architecture
Based on Mamdouh Al‑Enezi’s paper *Software Architecture Quality Measurement Stability and Understandability*, three granularity levels are identified: package/class/method, component/library, and overall architecture. Building on a previous article that split architecture optimisation into four directions—code implementation, component design, architecture design, and infrastructure—these granularity levels are extended to include runtime concerns.
3. Metrics for Measuring Software Architecture
Because a single universal metric set is unrealistic, appropriate metrics must be chosen according to business goals and the specific architectural concerns of a system. The following tables list practical indicators for each of the four optimisation dimensions.
3.1 Code‑Implementation Metrics

| Metric | Calculation Method | Explanation |
| --- | --- | --- |
| Cyclomatic Complexity | PMD cyclomatic‑complexity rule | High values indicate many decision points, making the function hard to understand and maintain. |
| Large Class Ratio | Number of large classes / total code size | Large classes handle too many responsibilities, leading to tight coupling and poor extensibility. |
| Large Function Ratio | Number of large functions / total code size | Same issue as large classes, but at the function level. |
| Duplicate Code Frequency | Number of duplicated code fragments | Duplication scatters logic, increasing maintenance difficulty and the risk of inconsistent changes. |
| External Component Calls in Loops | Frequency of database or third‑party API calls inside loops | Such calls can cause severe performance problems. |
| Test Coverage | Percentage of code covered by automated tests | Low coverage leaves existing functionality unprotected during changes. |
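To make the first metric concrete, here is a minimal sketch of counting cyclomatic complexity for Python source with the standard `ast` module. This is a rough approximation of McCabe's metric (decision points + 1), not PMD's exact rule, and the set of counted node types is a simplifying assumption:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: number of decision points + 1."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # Each branch or loop construct adds one decision point.
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            decisions += 1
        # Each extra operand of `and`/`or` adds a short-circuit branch.
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1
    return decisions + 1

simple = "def f(x):\n    if x > 0:\n        return 1\n    return 0\n"
print(cyclomatic_complexity(simple))  # one `if` -> complexity 2
```

A threshold such as 10 per function is a common starting point for flagging hot spots, but the right cut-off depends on the codebase.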
3.2 Component‑Design Metrics
| Metric | Calculation Method | Explanation |
| --- | --- | --- |
| Cyclic Dependency Count | Number of cyclic dependencies between classes | Violates the unidirectional‑dependency principle, making impact analysis difficult. |
| Class Stability | Refer to Robert C. Martin’s stability metrics (chapter 20 of *Agile Software Development*) | Stable classes are hard to change but heavily reused; instability should increase along the dependency direction. |
| Dependency Anomaly Count | Number of layering‑rule violations within the process | Similar to cyclic dependencies, indicating architectural drift. |
| Component‑Level Duplicate Code Frequency | Occurrences of duplicated functionality across components | Duplicated code across components raises maintenance cost. |
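Martin's stability metric referenced above is I = Ce / (Ca + Ce), where Ce is efferent (outgoing) coupling and Ca is afferent (incoming) coupling; I = 0 means maximally stable, I = 1 maximally unstable. A small sketch computing it from a dependency edge list (the class names are hypothetical, for illustration only):

```python
from collections import defaultdict

def stability_scores(edges):
    """edges: (source, target) pairs meaning 'source depends on target'.
    Returns Martin's instability I = Ce / (Ca + Ce) per class."""
    ce = defaultdict(int)  # efferent coupling: outgoing dependencies
    ca = defaultdict(int)  # afferent coupling: incoming dependencies
    nodes = set()
    for src, dst in edges:
        ce[src] += 1
        ca[dst] += 1
        nodes.update((src, dst))
    return {n: (ce[n] / (ca[n] + ce[n]) if ca[n] + ce[n] else 0.0)
            for n in nodes}

deps = [("OrderService", "OrderRepo"),
        ("OrderService", "Clock"),
        ("Billing", "OrderRepo")]
scores = stability_scores(deps)
print(scores["OrderRepo"])      # 0.0 -> stable, many depend on it
print(scores["OrderService"])   # 1.0 -> unstable, free to change
```

"Instability should increase along the dependency direction" then reads directly off these numbers: stable elements (low I) should sit at the bottom of the dependency graph.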
3.3 Architecture‑Design Metrics
| Metric | Calculation Method | Explanation |
| --- | --- | --- |
| Component Cyclic Dependency Count | Number of cyclic dependencies between components | Indicates tangled responsibilities and unclear boundaries. |
| Component Responsibility Deviation Rate | Deviation between component responsibilities and the business model | High deviation means components are not aligned with domain logic. |
| Component Stability | Same calculation as class stability, applied to components | Reflects how resistant a component is to change. |
| Call‑Chain Length | Number of hops an API traverses across components | Longer chains increase dependency depth and debugging difficulty. |
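Counting component cycles is mechanical once the dependency graph is extracted from build metadata. A minimal depth-first-search sketch, assuming the graph is a plain adjacency mapping (the component names are made up for the example):

```python
def find_cycles(deps):
    """Detect cyclic dependencies in a component graph.
    deps: {component: [components it depends on]}"""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {}
    cycles = []
    path = []

    def visit(n):
        color[n] = GRAY
        path.append(n)
        for m in deps.get(n, []):
            if color.get(m, WHITE) == GRAY:
                # m is on the current path: we closed a cycle.
                cycles.append(path[path.index(m):] + [m])
            elif color.get(m, WHITE) == WHITE:
                visit(m)
        path.pop()
        color[n] = BLACK

    for n in list(deps):
        if color.get(n, WHITE) == WHITE:
            visit(n)
    return cycles

graph = {"web": ["service"], "service": ["repo"], "repo": ["service"]}
print(find_cycles(graph))  # [['service', 'repo', 'service']]
```

In practice the edges would come from a build tool or static analyzer rather than being written by hand; the governance target is simply to drive this count to zero.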
3.4 Infrastructure (Runtime) Metrics
| Metric | Calculation Method | Explanation |
| --- | --- | --- |
| Infrastructure Load Rate | Utilisation of resources such as databases and caches | Shows whether the system is approaching a bottleneck. |
| Average Response Time | Mean response time of production‑environment APIs | Direct indicator of architectural performance. |
| System Availability | Proportion of time the system is online | Measures overall stability and quality. |
| Mean Time to Recovery (MTTR) | Average time to restore service after a failure | Reflects the architecture’s resilience to incidents. |
| Data Inconsistency Ratio | Share of data‑inconsistency issues among all reported problems | Often caused by poor design or improper component interaction. |
| Useless Feature Ratio | Proportion of features unused for a long period in production | Unused APIs increase cognitive load and should be removed. |
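Availability and MTTR follow directly from incident records. A minimal sketch, assuming downtime is tracked per incident in minutes (the sample numbers are illustrative, not from the article):

```python
def availability(total_minutes: float, downtime_minutes: float) -> float:
    """Fraction of the observation window the system was usable."""
    return (total_minutes - downtime_minutes) / total_minutes

def mttr(incident_durations_minutes) -> float:
    """Mean Time To Recovery: average outage length per incident."""
    return sum(incident_durations_minutes) / len(incident_durations_minutes)

# Hypothetical month: 30 days, two incidents lasting 30 and 14 minutes.
window = 30 * 24 * 60          # 43200 minutes
incidents = [30, 14]
print(round(availability(window, sum(incidents)), 4))  # 0.999
print(mttr(incidents))                                 # 22.0 minutes
```

Note that availability and MTTR pull in different directions: many short outages can leave availability high while MTTR stays low, whereas one long outage hurts both, so the two metrics should be read together.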
4. Beyond Metrics
Metrics are essential for architecture governance, but they are only references. Experienced engineers must interpret the numbers, combine them with domain knowledge, and sometimes act even when metrics look normal. Some important aspects of architecture cannot be quantified, yet they remain critical for successful system evolution.
References:
[1] "Architecture Optimisation Directions" – https://www.maguangguang.xyz/architecture-optimization-topics
[2] ISO/IEC 25010 – Systems and Software Quality Requirements and Evaluation (SQuaRE)
[3] "Cyclic Dependency" – https://www.maguangguang.xyz/eliminate-cyclic-dependency
[4] "System Availability" – https://www.maguangguang.xyz/how-to-improve-system-availability
[5] "Data Consistency" – https://www.maguangguang.xyz/data-consistency