Why AI Coding Only Solves About 70% of the Work: The Hidden Trust Gap

Addy Osmani’s analysis shows that while AI can generate roughly 70% of code—handling scaffolding and common patterns—the remaining 30% involving edge cases, security, and production integration remains as hard as before, and developers’ trust in AI‑generated code is declining.

Wuming AI

Google Chrome engineer Addy Osmani has been tracking AI‑assisted development for two years. Within Google, more than 30% of code is now produced by AI, and he has gathered data from industry conferences such as Lead Dev to measure adoption, trust, and productivity.

The 70% Problem

Osmani observes a consistent pattern: AI can quickly deliver about 70% of a solution—generating scaffolding, boilerplate, and common design patterns. The remaining 30%—including handling edge cases, ensuring security, integrating with production systems, and managing API keys—still requires the same amount of effort as before.

"AI coding tools can help you finish most of the work, but not all. AI easily creates the illusion of a complete UI or a PRD with a few prompts, yet the internals are often a temporary patchwork." – Addy Osmani
"AI can generate roughly 70% of the code for an application or feature, such as scaffolding and generic patterns. The remaining 30%—extra debugging for boundary conditions, production integration, security checks, API‑key management—takes just as much time as before." – Addy Osmani

Trust Crisis

Despite high adoption, trust is falling. In the past two years, developers’ favorability toward AI coding dropped from 70% to 60%, and about 30% of engineers report little or no trust in AI‑generated code. Osmani warns that this erosion of confidence is “crazy” given the growing reliance on these tools.

"Adoption is high, but trust is surprisingly low and continues to decline. Favorability fell from 70% to 60%, and roughly 30% of people say they almost never trust code written by AI." – Addy Osmani
"You don’t want a bug that lands on Hacker News." – Addy Osmani

“Step‑Back” Mode: When AI Fixes Create More Bugs

Osmani describes a feedback loop: an AI suggests a seemingly reasonable change to fix a bug, but the change breaks other parts of the system. When the AI is asked to fix the new issue, it often introduces additional problems, sometimes spawning five new bugs from a single request.

"You want to fix a bug, AI proposes a plausible edit, maybe even a plan inside the IDE. The fix ends up breaking other things. You ask AI to fix that, and it creates two new problems, and the cycle repeats. Sometimes it even creates five new issues." – Addy Osmani
"If I let AI do a task, it rewrites code in five different places and I have no idea how everything works together. I have to go back and understand the whole system—that’s a basic requirement for me." – Addy Osmani

Overall, Osmani concludes that while AI dramatically speeds up the bulk of coding work, the critical, high‑risk portions remain manual, and the decreasing trust signals a need for better validation, observability, and developer control over AI‑generated output.

Tags: AI coding, software development, productivity, industry analysis, trust, Addy Osmani
Written by

Wuming AI

Practical AI for solving real problems and creating value
