How a Unified Code Metrics Platform Boosts Development Quality and Efficiency
This article describes the design, implementation, and operation of a unified code-metrics platform that standardizes coding conventions, automates quality checks, and drives data-guided improvement across multiple development teams, ultimately improving code reliability, maintainability, and CI/CD throughput.
Background
The company’s rapid growth and an influx of senior engineers from diverse backgrounds led to inconsistent coding styles, weak unit-test awareness, and more code than supervisors could realistically review line by line.
To address these challenges, an automated code‑measurement platform was built to enforce coding standards, assist with unit testing and security checks, quantify code quality, and provide actionable metrics for continuous improvement.
Goal
The platform aims to provide:
Evidence for code optimization
Metrics to assess development quality
A closed‑loop quality assurance process
Implementation focuses on four aspects:
Standardization: coding conventions, third‑party package publishing, and third‑party package inclusion rules
Automation: automated analysis, statistics, and report generation
Customization: tailored scanning rules for company, business line, agile team, and project levels
Process integration: binding quality checks to the development workflow
Platform Design
Technical Architecture
The platform combines SonarQube with the SQALE model for static analysis, Cobra for code auditing, and Snyk for third‑party vulnerability scanning, forming a complete toolchain.
Java coding standards are based on Alibaba’s Java Development Manual (P3C) combined with FindBugs, PMD, and custom KuJava rules.
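A custom rule in this spirit can be sketched as a simple source scanner. The rule below is a hypothetical illustration of the kind of check a KuJava-style rule set might contain (it does not use the actual PMD/FindBugs APIs, and the rule name and logic are assumptions for this example):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical custom coding-standard rule: flags direct console logging,
// which a team convention might forbid in favor of a logging framework.
public class NoConsoleLoggingRule {
    // Returns the 1-based line numbers of violations in the given source.
    public static List<Integer> findViolations(List<String> sourceLines) {
        List<Integer> violations = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            String line = sourceLines.get(i);
            if (line.contains("System.out.println") || line.contains("System.err.println")) {
                violations.add(i + 1);
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        List<String> source = List.of(
            "public class Demo {",
            "    void run() { System.out.println(\"debug\"); }",
            "}");
        System.out.println(findViolations(source)); // prints [2]
    }
}
```

In a real deployment such checks would be implemented as PMD or SonarQube plugin rules so they run inside the same scan pipeline as the stock rule sets.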
Unified Metrics
Quality is measured objectively, reproducibly, and automatically using SQALE‑based indicators such as test pass rate, test line coverage, duplication, complexity, code smells, bugs, and vulnerabilities.
Overall quality score = test pass score + test coverage score + duplication score + complexity score + code‑smell score + bug score + vulnerability score, each weighted according to severity.
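The weighted sum above can be sketched directly. The metric names, per-metric scores, and severity weights below are illustrative assumptions; the article does not publish the platform's actual weighting:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the overall quality score as a severity-weighted sum of
// per-metric scores (each score assumed to be on a 0-100 scale).
public class QualityScore {
    public static double overallScore(Map<String, Double> metricScores,
                                      Map<String, Double> weights) {
        double total = 0.0;
        for (Map.Entry<String, Double> e : metricScores.entrySet()) {
            // Each component contributes its score times its severity weight.
            total += e.getValue() * weights.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = new LinkedHashMap<>();
        scores.put("testPass", 90.0);
        scores.put("coverage", 70.0);
        scores.put("duplication", 80.0);
        scores.put("complexity", 85.0);
        scores.put("codeSmells", 75.0);
        scores.put("bugs", 95.0);
        scores.put("vulnerabilities", 100.0);

        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("testPass", 0.15);
        weights.put("coverage", 0.15);
        weights.put("duplication", 0.10);
        weights.put("complexity", 0.10);
        weights.put("codeSmells", 0.10);
        weights.put("bugs", 0.20);
        weights.put("vulnerabilities", 0.20);

        System.out.printf("overall = %.1f%n", overallScore(scores, weights)); // prints overall = 87.0
    }
}
```

Weighting bugs and vulnerabilities more heavily than style metrics reflects the "weighted according to severity" idea: a security defect should move the score more than a duplicated block.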
Platform Operation
Data‑Driven Improvement
Quality trends are visualized by business group, showing metric evolution over development cycles, and top‑ranked unit‑test results are published to encourage better practices.
Process Checkpoints
Business lines define custom quality thresholds (red lines) as a quality covenant.
Each CD deployment triggers a quality check covering interface tests, code scans, third‑party package verification, and SQL pre‑checks; only fully compliant builds are released.
Applications exceeding thresholds are marked non‑compliant and blocked from progressing through the pipeline unless an owner approves an emergency release.
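The checkpoint logic can be sketched as a red-line comparison with a single escape hatch. The threshold names and values below are illustrative assumptions (all metrics here are "lower is better"):

```java
import java.util.Map;

// Sketch of the per-deployment quality checkpoint: a build ships only when
// every metric stays inside the business line's "red line" thresholds,
// unless an owner explicitly approves an emergency release.
public class QualityGate {
    public static boolean mayRelease(Map<String, Double> metrics,
                                     Map<String, Double> redLines,
                                     boolean emergencyApprovedByOwner) {
        for (Map.Entry<String, Double> redLine : redLines.entrySet()) {
            // Treat a missing metric as a violation rather than a pass.
            double value = metrics.getOrDefault(redLine.getKey(), Double.MAX_VALUE);
            if (value > redLine.getValue()) {
                return emergencyApprovedByOwner; // non-compliant build
            }
        }
        return true; // fully compliant build
    }

    public static void main(String[] args) {
        Map<String, Double> redLines = Map.of("blockerBugs", 0.0, "duplicationPct", 10.0);
        Map<String, Double> metrics  = Map.of("blockerBugs", 2.0, "duplicationPct", 4.5);
        System.out.println(mayRelease(metrics, redLines, false)); // prints false
        System.out.println(mayRelease(metrics, redLines, true));  // prints true
    }
}
```

The same check would run alongside the interface tests, code scans, package verification, and SQL pre-checks listed above, so a release requires all gates to pass, not just the metric thresholds.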
Quality Operations
Four operational stages guide improvement:
Stage 1 – Clean up critical bugs and vulnerabilities.
Stage 2 – Reduce code duplication below 10%.
Stage 3 – Embed quality by ensuring unit‑test coverage and pass rates.
Stage 4 – Incorporate security scans, code audits, and license checks to assess design quality.
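The staged rollout above can be expressed as a simple triage function that reports the earliest stage still needing work. The 10% duplication bound comes from Stage 2; the 60% coverage target is an assumed placeholder, since the article does not state one:

```java
// Sketch mapping a project's current metrics to the four operational
// stages: critical cleanup, duplication reduction, test coverage, and
// security/design review.
public class QualityStage {
    public static int nextStage(int criticalIssues, double duplicationPct, double coveragePct) {
        if (criticalIssues > 0) return 1;   // Stage 1: critical bugs and vulnerabilities remain
        if (duplicationPct >= 10.0) return 2; // Stage 2: duplication not yet below 10%
        if (coveragePct < 60.0) return 3;   // Stage 3: unit-test coverage below the (assumed) target
        return 4;                           // Stage 4: security scans, audits, license checks
    }

    public static void main(String[] args) {
        System.out.println(nextStage(0, 12.5, 30.0)); // prints 2
        System.out.println(nextStage(0, 4.0, 80.0));  // prints 4
    }
}
```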
Results and Vision
Deliverables include the KuJava coding standard, automated CI/CD quality gates, GitLab CI integration for static scans, and customizable business‑line rules.
The long‑term vision is a “code gate” that protects every commit to the main branch, shifting quality left so testing begins at the first line of code.
Qunhe Technology Quality Tech
Kujiale Technology Quality