AI Is Writing Code at Scale—Who’s Checking It?
Recent surveys suggest that in many organizations more than half of all code is now AI-generated, and that a significant share of it reaches production without review, posing serious supply-chain security risks. Developers worry that AI amplifies malicious-software threats, while trust models and tooling lag behind the pace of adoption.
In many organizations, more than half of the code is now generated by artificial intelligence, and much of it is deployed to production with little or no oversight.
The Cloudsmith "2025 Artifact Management Report" states that AI is now writing code at scale; 42% of developers say at least half of their code is AI‑generated, 16.6% say most of it is, and 3.6% claim all of it is.
A GitHub survey covering the US, Brazil, Germany and India found that over 97% of respondents have used AI coding tools at work, with company support ranging from 88% in the US to 59% in Germany.
Cloudsmith warns that while large language models can boost productivity, they may unintentionally recommend non‑existent or malicious packages, creating high risk.
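One common mitigation for hallucinated or typosquatted package suggestions is to validate LLM-recommended dependencies against a vetted allowlist before installation. The sketch below is illustrative only; the allowlist contents and function names are assumptions, not part of any Cloudsmith tooling.

```python
# Minimal sketch: screen LLM-suggested dependencies against a vetted
# allowlist before installing, catching hallucinated or typosquatted names.
# VETTED_PACKAGES is a stand-in for an organization's approved-package list.

VETTED_PACKAGES = {"requests", "numpy", "flask"}

def filter_suggestions(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split suggested package names into (approved, flagged_for_review)."""
    approved = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    flagged = [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return approved, flagged

approved, flagged = filter_suggestions(["requests", "requsets", "numpy"])
print(flagged)  # ['requsets'] — a likely typosquat, blocked pending review
```

In practice such a check would sit in CI or in a package-manager hook, so a hallucinated name never reaches a live registry lookup or an install step.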
When asked whether AI would increase open-source malware threats, 79.2% of developers said AI will raise the amount of malicious software in circulation, 30% said it will significantly increase exposure, and 13% thought AI could help prevent or reduce threats. 40% named code generation as the biggest risk area.
In practice, one-third of developers do not review AI-generated code before each deployment, meaning a large share of shipped code goes unreviewed and steadily widens the attack surface of software supply chains.
Two‑thirds of developers say they only trust AI‑generated code after human review, yet AI’s share of global codebases continues to grow.
These findings indicate that AI is layering new, large-scale risks onto traditional concerns such as artifact integrity, dependency management, and software bills of materials (SBOMs), making each of them more complex.
Cloudsmith sees this as a turning point in software engineering: AI will become a key contributor to the software stack, but trust models, tools, and strategies have not kept pace, and relying on manual code review is unsustainable.
They advocate stronger artifact management built on intelligent access controls, end-to-end visibility, dynamic policies, and a robust policy-as-code framework.
Specifically, they propose automated policies to identify unreviewed or untrusted AI‑generated code and use provenance tracking to distinguish human‑written from AI‑generated code, integrating trust signals directly into the development workflow.
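The policy-as-code idea above can be sketched as a simple gate that inspects an artifact's provenance and review metadata. This is a minimal illustration under assumed field names ("provenance", "human_reviewed"); it does not reflect Cloudsmith's actual API or policy format.

```python
# Hypothetical policy-as-code check: gate artifacts on provenance and
# human-review status. Metadata field names are illustrative assumptions.

def evaluate_artifact(metadata: dict) -> str:
    """Return 'allow', 'quarantine', or 'deny' for a build artifact."""
    provenance = metadata.get("provenance", "unknown")
    reviewed = metadata.get("human_reviewed", False)

    if provenance == "human":
        return "allow"
    if provenance == "ai-generated":
        # AI-generated code passes only after human review.
        return "allow" if reviewed else "quarantine"
    # Unknown provenance is treated as untrusted.
    return "deny"

print(evaluate_artifact({"provenance": "ai-generated",
                         "human_reviewed": False}))  # quarantine
```

A rule like this, run automatically at publish time, is one way to surface the trust signals the report describes directly inside the development workflow rather than relying on manual review.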
Source: 21CTO (21CTO.com), a developer community, training, and services platform.