Why AI Coding Tools Leak Data by Default: Lessons from Recent High‑Profile Breaches
Recent data‑leak incidents at AI coding platforms such as Lovable, Vercel, and Anthropic reveal that insecure default settings, supply‑chain compromises, and inadequate threat modeling expose user code and chat logs, prompting experts to warn about the trade‑off between usability and security.
Several well‑known AI coding companies—including Lovable, Vercel, and Anthropic—have recently suffered major data‑exposure incidents. The causes range from default designs that expose user data, to supply‑chain attacks on employee accounts, to simple operational mistakes.
Lovable, a Swedish AI‑programming startup, was accused by an X user (Impulsive) of allowing free‑account access to another user’s code, AI chat history, and client data, affecting every project created before November 2025. The company first denied a breach, claiming public project visibility was intentional, but later issued a second statement admitting that a February backend permission change unintentionally re‑enabled public access to project chats. Lovable rolled back the change and thanked the researcher who discovered the issue.
Security experts highlighted the systemic problem. Tom Van de Wiele of Hacker Minded said the incident shows that without security‑by‑default and proper threat modeling for the AI era, such flaws are inevitable. ESET’s global network‑security advisor Jake Moore argued the event is not a traditional hack but a design flaw, emphasizing that relying on users to decide what is public often fails.
The discussion underscores a fundamental trade‑off between ease of use and security. Professional developers caution against over‑reliance on AI‑generated code because it can be messy and insufficiently tested, and they note that AI‑assisted tools can inadvertently expose company data. Companies face a dilemma: they want to lower the barrier to entry for new users, yet doing so makes it harder to keep data from being inadvertently exposed or harvested. Moore warned that default settings alone can reveal sensitive information without any attacker activity.
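The secure‑by‑default principle the experts invoke can be illustrated with a minimal sketch. This is a hypothetical example, not code from any of the platforms mentioned: the `Project`, `Visibility`, and `can_read` names are invented for illustration. The idea is that visibility defaults to private and reads are denied unless the owner has explicitly opted in, so a later permission change cannot silently widen access.

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"
    PUBLIC = "public"

@dataclass
class Project:
    owner: str
    # Secure by default: a project is private unless explicitly published.
    visibility: Visibility = Visibility.PRIVATE

def can_read(project: Project, requester: str) -> bool:
    """Deny-by-default access check: only the owner, or anyone for
    projects the owner has explicitly made public."""
    if requester == project.owner:
        return True
    return project.visibility is Visibility.PUBLIC

# A newly created project is not readable by strangers...
p = Project(owner="alice")
assert not can_read(p, "mallory")
# ...until the owner deliberately opts in to public visibility.
p.visibility = Visibility.PUBLIC
assert can_read(p, "mallory")
```

The inverse design, defaulting to public and relying on users to lock projects down, is exactly the failure mode Moore describes: no attacker is needed for data to leak.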
These incidents are part of a broader pattern. In late March, Anthropic mistakenly released an archive containing nearly 2,000 files and 500,000 lines of code, asserting that no sensitive client data was involved. Earlier this week, Vercel disclosed a breach in which a compromised third‑party tool (Context.ai) allowed an attacker to hijack a staff Google Workspace account and gain partial access to Vercel’s environment; Vercel engaged incident‑response specialists and notified law enforcement.
Industry voices echo the concerns. Andreessen Horowitz partner Anish Acharya, speaking on a February podcast, warned that AI assistance should not be embedded in every business layer because the associated risks outweigh the benefits. Overall, the series of breaches illustrates how insecure defaults in AI programming platforms can amplify data‑exposure risks.
Reference: Business Insider
