
EPC Metric System for Software Delivery and Operations – Result and Process Indicators

The EPC metric system, authored by consulting expert Qiao Liang, outlines result‑display and process‑guiding indicators for software delivery and operations, detailing metrics across code quality, testing, CI/CD, infrastructure, and security, and provides guidance on phased, context‑aware adoption.

I am Qiao Liang, author of "Continuous Delivery 2.0" and a veteran enterprise consulting manager. Here I share a set of software engineering management improvement metrics gathered from years of consulting work.

These metrics are divided into two categories: result‑display indicators that evaluate delivery and operational outcomes, and process‑guiding indicators that steer team behavior.

Result‑Display Indicator Set

Mini‑Feature Lead Time P75 (a percentile computation sketch follows this list)

User Story Delivery Time P85

Effective Defect Count

Pending Defect Count

Legacy Defect Count

Bug Rate per KLOC

Online R&D Defect Count

Online R&D Defect Resolution Time

R&D Incident Count

R&D Incident Average Resolution Time

Requirement Release Count

Product Code Lines

Web Vulnerability Compliance

Security Vulnerability Compliance

Development‑Stage Security Vulnerability Timely Handling

MTTR (mean time to recovery)

P75 Product Request‑to‑Verification Cycle

Mobile Crash Rate

Experiment Process Exception Rate
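
Several metrics in this set, such as Mini-Feature Lead Time P75 and User Story Delivery Time P85, are percentile statistics: instead of an average, you report the value below which 75% (or 85%) of delivered items fall, which keeps a few extreme outliers from dominating the number. A minimal sketch of the computation, using hypothetical ticket timestamps and a nearest-rank percentile:

```python
import math
from datetime import datetime

# Hypothetical (created, delivered) timestamps for finished user stories.
stories = [
    (datetime(2024, 5, 1), datetime(2024, 5, 4)),
    (datetime(2024, 5, 2), datetime(2024, 5, 10)),
    (datetime(2024, 5, 3), datetime(2024, 5, 5)),
    (datetime(2024, 5, 6), datetime(2024, 5, 20)),  # outlier
]

# Lead time in days for each story, sorted ascending.
lead_times = sorted(
    (done - created).total_seconds() / 86400 for created, done in stories
)

def percentile(sorted_values, p):
    """Nearest-rank percentile: smallest value covering p% of the sample."""
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

print(f"Lead Time P75: {percentile(lead_times, 75):.1f} days")  # 8.0
print(f"Lead Time P85: {percentile(lead_times, 85):.1f} days")  # 14.0
```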

Process‑Guiding Indicator Set

Process‑guiding indicators consist of 12 dimensions, each containing several metric items.

Requirement Collaboration

New Defect Timely Handling

Defect Operation Standardization Rate

Defect Management Process Adoption Rate

Requirement Ticket Operation Standardization Rate

Requirement Management Process Adoption Rate

Unsynced Requirement Changes

Requirement (User Story) Development Cycle

Code Quality

Code Commit Log Standardization (see the commit-message check sketch after this list)

Code Commit Linked Work Items

Code Standard Examination

Security Standard Examination

Code Cyclomatic Complexity

New Code CR Coverage

Code Standard Compliance Rate

Open‑Source Governance Code Quality

Code Owner Coverage
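
Commit-level metrics like Code Commit Log Standardization and Code Commit Linked Work Items are usually computed by scanning commit messages against the team's convention. A minimal sketch, assuming a hypothetical convention in which every message carries a type tag and references a work item such as PROJ-123 (the pattern and IDs here are illustrative, not part of the EPC system):

```python
import re

# Hypothetical convention: "<type>: <summary> (<WORK-ITEM>)",
# e.g. "fix: handle empty cart (PROJ-482)".
COMMIT_RE = re.compile(r"^(feat|fix|refactor|test|docs|chore): .+ \([A-Z]+-\d+\)$")

def standardization_rate(messages):
    """Share of commit messages matching the team convention."""
    if not messages:
        return 0.0
    return sum(1 for m in messages if COMMIT_RE.match(m)) / len(messages)

messages = [
    "feat: add coupon service (PROJ-101)",
    "fix typo",  # no type tag, no linked work item
    "refactor: split billing module (PROJ-117)",
]
print(f"Commit log standardization: {standardization_rate(messages):.0%}")  # 67%
```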

Test Management

Recent 30‑Day Legacy Defects

Average Defects per Requirement

Requirement Acceptance Wait Time

Requirement‑Induced Defects

Automated Testing

Full Automation Coverage

Incremental Automation Coverage (see the incremental-coverage sketch after this list)

Automation Stability Rate

Unit Test Full Coverage

Unit Test Incremental Coverage

Test Case Construction
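
Incremental coverage metrics (Incremental Automation Coverage, Unit Test Incremental Coverage) restrict the measurement to code touched by the change, so a legacy codebase with low overall coverage can still hold new code to a high bar. A minimal sketch, assuming the changed-line and executed-line sets have already been extracted from your diff and coverage tooling:

```python
def incremental_coverage(changed_lines, executed_lines):
    """Coverage over only the executable lines touched by a change set."""
    if not changed_lines:
        return 1.0  # nothing new to cover
    return len(changed_lines & executed_lines) / len(changed_lines)

# Hypothetical change: lines 10-14 modified; tests executed 10, 11 and 13.
changed = {10, 11, 12, 13, 14}
executed = {3, 10, 11, 13, 40}
print(f"Incremental coverage: {incremental_coverage(changed, executed):.0%}")  # 60%
```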

Continuous Integration

CLCT

Commit Build Compliance Rate

Secondary Build Compliance Rate

Commit Build Timely Success Rate (see the sketch after this list)

Secondary Build Timely Success Rate

Pipeline as Code

Linear Commit Rate

Integration Environment Deployment Granularity

Pre‑Release Environment Deployment Granularity

Production Environment Deployment Granularity

Commit Build Granularity
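
Commit Build Timely Success Rate combines two conditions: the build must pass and it must finish within an agreed time budget. The exact budget is team-specific; ten minutes is a common continuous-integration rule of thumb, and that assumption is used in the minimal sketch below:

```python
from dataclasses import dataclass

@dataclass
class Build:
    passed: bool
    duration_min: float

def timely_success_rate(builds, budget_min=10.0):
    """Share of builds that both passed and finished within the time budget."""
    if not builds:
        return 0.0
    ok = sum(1 for b in builds if b.passed and b.duration_min <= budget_min)
    return ok / len(builds)

builds = [
    Build(passed=True, duration_min=6.5),
    Build(passed=True, duration_min=14.0),  # passed, but over budget
    Build(passed=False, duration_min=4.0),  # fast, but failed
    Build(passed=True, duration_min=9.0),
]
print(f"Commit build timely success rate: {timely_success_rate(builds):.0%}")  # 50%
```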

Backend Deployment & Operations

Disaster Recovery Capability

Elastic Scheduling

Release Capability

Failure Drills

Chaos Engineering

Monitoring Capability

Backend Architecture

Name Service

Multi‑Region Disaster Recovery

External Domain Firewall Integration

Self‑Healing & Stability

Service Resource Consumption Control

Canary Verification Capability

Automatic Resource Allocation

The metrics above are supported by four further dimensions, each linked to tooling capabilities: environment management, configuration management, artifact management, and product monitoring. Additional dimensions, such as branch management, are omitted here.

How to Use the EPC Metric System

These indicators are not meant to be applied simultaneously; they should be adopted in batches based on the client’s stage‑specific goals, infrastructure maturity, and team status, under professional guidance.

Be aware that some metrics depend on the client’s software engineering infrastructure; use them progressively and avoid costly, short‑term fixes that target only a few numbers.

Because software delivery and operations (SDOP) involve many roles and a long toolchain, improving isolated process metrics may not affect result metrics. Teams should employ systems thinking, map system boundaries, and identify feedback loops to keep guiding indicators up‑to‑date.

Friendly Reminder

Measurement incurs non‑trivial cost.

When a metric becomes a target, it ceases to be a good metric (Goodhart’s Law).

Metrics will eventually be gamed (implication of Goodhart’s Law).

Improvement should not be “numbers‑only”.
