Agentic Development Cycle: Real‑Time Validation with AI Agents
The AC/DC framework reimagines software development by moving verification from post‑commit CI pipelines to the moment code is generated: AI agents write, check, and fix code in a self‑correcting loop, and the responsibilities of human engineers are redefined accordingly.
Background and Motivation
In March 2026, Sonar announced the AC/DC (Agent‑Centric Development Cycle) framework to address the structural mismatch between traditional continuous integration (CI) and AI coding agents. Traditional CI assumes human‑written, incremental commits and post‑commit quality gates, which does not fit the asynchronous, batch‑oriented output of agents.
Carnegie Mellon tracked 807 open‑source projects that used AI coding agents. The study found a short‑term speed boost, but after three months code‑analysis warnings rose 30% and code complexity rose 41%, indicating that rapid generation accumulates technical debt.
Core Idea of AC/DC
AC/DC shifts verification from the end of the pipeline to the moment of code generation, creating a self‑correcting closed loop where the agent writes, validates, and repairs code continuously.
Four‑Stage Closed Loop
Guide: Inject project‑specific context (architecture boundaries, coding standards, security policies) into the agent via Model Context Protocol (MCP). Context is injected on‑demand for the current file and task, reducing token usage and improving build and test pass rates.
Generate: The agent produces code inside an isolated sandbox using any coding tool (Claude Code, Cursor, Copilot, Codex, etc.). The agent first emits a design proposal, which a human reviews before implementation.
Verify: Validation occurs in two layers:
Inner loop: After each reasoning step the agent runs static analysis tools and immediately fixes detected issues ("write‑and‑verify").
Outer loop: After the sandbox run, a deterministic cross‑file analysis scans the entire output, providing a safety net.
Solve: Detected problems are handed to a remediation agent (e.g., SonarQube Remediation Agent) that automatically generates fixes or creates PRs for historical debt. Each fix is re‑scanned to ensure no new issues are introduced.
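The four stages above can be sketched as a single loop. This is a toy illustration, not Sonar's implementation: the generator, analyzer, and remediator below are hypothetical stand‑ins (the rule key mimics SonarQube's style, but the logic is invented for the example).

```python
# Toy stand-ins for the AC/DC stages. All function names and behavior are
# illustrative assumptions, not part of any real Sonar or MCP API.

def generate(task: str, context: dict) -> str:
    # Generate: pretend the agent's first draft violates a Guide rule.
    return "password = 'hunter2'  # hard-coded secret"

def static_analysis(code: str) -> list[str]:
    # Verify: a deterministic check that flags the known smell.
    issues = []
    if "hard-coded secret" in code:
        issues.append("S2068: credentials should not be hard-coded")
    return issues

def remediate(code: str, issues: list[str]) -> str:
    # Solve: a remediation agent would synthesize a real fix; here we
    # swap in an environment lookup so the finding clears on re-scan.
    return "import os\npassword = os.environ['APP_PASSWORD']"

def acdc_loop(task: str, context: dict, max_rounds: int = 3):
    """Guide -> Generate -> Verify (inner loop) -> Solve, until clean."""
    code = generate(task, context)           # Generate, with Guide context
    for _ in range(max_rounds):
        issues = static_analysis(code)       # Verify: inner-loop scan
        if not issues:
            return code, []                  # clean; outer loop does a final pass
        code = remediate(code, issues)       # Solve: auto-fix, then re-scan
    return code, static_analysis(code)       # unresolved issues escalate to a human

code, remaining = acdc_loop("store the password", {"rule": "no hard-coded secrets"})
print(remaining)  # → []
```

The key property the sketch shows is that every Solve output flows back through Verify, so a fix can never silently introduce a new finding.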
Structural Comparison with Traditional CI
Design focus: Traditional CI is human‑centric; AC/DC is agent‑centric.
Validation timing: CI validates after commit; AC/DC validates during code generation.
Quality responsibility: Human developers + CI gates vs. agent self‑check + deterministic system fallback.
Context injection: Human knowledge vs. MCP‑driven real‑time project rules.
Defect fixing: Manual edits vs. automated remediation agent with optional human review.
Learning mechanism: Post‑mortem human retrospectives vs. automatic feedback from Solve to Guide.
Redefining Human Engineer Roles
Constraint definers: Specify architecture boundaries, coding standards, and security policies that agents must obey.
Quality reviewers: Audit agent outputs at key decision points rather than line‑by‑line code review.
Architecture decision makers: Guide system evolution, module decomposition, and technical‑debt prioritisation.
Adoption Roadmap
AC/DC is introduced in four practical steps:
Start with low‑risk scenarios by configuring the Guide mechanism (e.g., an AGENTS.md file) so agents know coding standards.
Enable Agentic Analysis in existing code‑analysis platforms to let agents invoke static‑analysis tools during generation, first activating the inner loop, then the outer loop.
Pilot the Remediation Agent on high‑confidence, low‑risk issues, with human review of generated fixes.
Close the loop: collect Solve‑stage repair data to refine Guide‑stage context injection.
The principle is to avoid full automation initially; let agents master self‑validation before expanding self‑repair.
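For step 1, the Guide configuration can be as simple as a project‑level AGENTS.md file that agents read before generating code. The rules below are invented for illustration; real content depends on the project:

```markdown
# AGENTS.md (example only — every rule here is hypothetical)

## Architecture boundaries
- Modules under `core/` must not import from `web/`.

## Coding standards
- Python 3.12; type hints required on public functions.
- Keep cyclomatic complexity under 10.

## Security policies
- Never hard-code credentials; read secrets from the environment.
```

Because the file lives in the repository, the same constraints reach every coding tool the team uses, which is what makes it a low‑risk first step.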
Open Challenges
Evaluation metrics: No systematic method yet to measure the effectiveness of agent self‑validation.
Governance granularity: Unclear boundaries for which quality gates require human confirmation.
Human attention economics: With agent output speed increased tenfold, determining which review steps deserve human bandwidth remains unsolved.
Technical‑debt transfer: Continuous agent‑generated code and repairs may hide debt without architectural oversight.
Overall, AC/DC offers a structured approach to integrate AI agents into the software development lifecycle, but further research is needed to define metrics, governance, and sustainable human‑agent collaboration.
This article has been distilled and summarized from source material and republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.