Industry Insights

Why Cal.com Closed Its Source: AI‑Driven Threats Redefining Open‑Source Security

The article analyzes Cal.com’s abrupt shift to a closed‑source model, arguing that AI‑powered vulnerability discovery has turned open‑source transparency from a defensive advantage into a liability, and explores industry reactions, supporting data, and broader implications for the future of open‑source software.


Background and Decision

Cal.com announced that its production code, including enterprise workflow automation, team collaboration, authentication, data handling, and AI-driven phone features, will be moved to a private repository. A stripped-down MIT-licensed branch called Cal.diy remains public but excludes those enterprise components.

AI‑driven shift in vulnerability discovery

In early April 2026 Anthropic released Claude Mythos Preview, an autonomous model that discovered thousands of high‑severity zero‑day bugs across operating systems and browsers. Notable findings include a 27‑year‑old remote‑code‑execution flaw in OpenBSD and a 16‑year‑old vulnerability in the FFmpeg media library that had survived roughly 5 million prior automated scans.

Mythos was able to chain independent bugs into complex attack paths, escape sandbox environments, and generate exploit code that succeeded in 181 of several hundred attempts; the preceding model succeeded only twice under the same conditions. Even earlier models such as Claude Opus could already target open-source repositories and locate vulnerabilities with minimal effort.

Quantitative security landscape

Hex Security CEO Huzaifa Ahmad states that exploiting open‑source applications is 5–10 times easier than exploiting closed‑source equivalents.

The Cloud Security Alliance reports that the average time to exploit a vulnerability has dropped below 20 hours, while traditional patch cycles still assume response times measured in days to weeks.

The result is an attacker pipeline (discovery, analysis, exploit generation) that completes in hours, while defenders continue to operate on a days-to-weeks patch timeline, producing a pronounced asymmetry.

Community reactions and trade‑offs

Within 48 hours, Discourse published a rebuttal noting that AI tools such as GPT-5.3, GPT-5.4, and Claude Opus 4.6 have uncovered numerous issues in its own codebase. Discourse argues that source code is not a prerequisite for AI-driven attacks; compiled binaries and black-box APIs are equally vulnerable, so closing the source does not fundamentally improve security.

Linux kernel maintainer Greg Kroah-Hartman demonstrates a contrasting approach, employing an AI-assisted fuzzing tool named clanker to proactively discover and patch kernel bugs, illustrating that transparency can coexist with AI-enhanced defense.
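To make the defensive side of this concrete, the core idea behind fuzzing can be shown with a toy mutation-based fuzzer. This is a minimal sketch only: the article gives no details about clanker's internals, and the `toy_parser` target with its planted length-check bug is invented here purely for illustration.

```python
import random

def toy_parser(data: bytes) -> int:
    """Hypothetical fuzz target: a tiny length-prefixed parser with a planted bug."""
    if len(data) < 2:
        raise ValueError("input too short")
    length = data[0]
    payload = data[1:]
    if length > len(payload):  # planted bug: the length byte is trusted blindly
        raise IndexError("length byte exceeds payload size")
    return sum(payload[:length])

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Apply one random byte-level mutation: flip a bit, insert, or delete a byte."""
    b = bytearray(seed)
    op = rng.choice(("flip", "insert", "delete"))
    if op == "flip" and b:
        i = rng.randrange(len(b))
        b[i] ^= 1 << rng.randrange(8)
    elif op == "insert":
        b.insert(rng.randrange(len(b) + 1), rng.randrange(256))
    elif op == "delete" and b:
        del b[rng.randrange(len(b))]
    return bytes(b)

def fuzz(target, seed: bytes, iterations: int = 2000, rng_seed: int = 0):
    """Feed mutated inputs to the target and record every input that raises."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, type(exc).__name__))
    return crashes

# Start from a valid seed (length byte 3, payload "abc") and let mutation find crashes.
crashes = fuzz(toy_parser, seed=b"\x03abc")
print(f"{len(crashes)} crashing inputs found")
```

Production fuzzers such as those used on the kernel add coverage feedback, corpus management, and sanitizers on top of this loop; an AI-assisted variant would additionally use a model to propose structurally interesting mutations rather than purely random ones.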

Cal.com co‑founder Peer Richelsen emphasizes that open‑source security has historically depended on human bug‑finders, and AI attackers now exploit that openness. He recommends privatizing sensitive components across open‑source projects, positioning Cal.com’s move as a potential industry trend.

Broader economic impact of AI on open‑source

A 2024 research report shows that AI-assisted coding reduced Tailwind CSS documentation traffic by 40% and cut related revenue by roughly 80%.

Technical Q&A platform Stack Overflow experienced a noticeable decline in traffic.

The curl project reported that 20% of the security reports it received in 2025 were AI-generated false positives.

Conclusion

The convergence of autonomous AI bug discovery, rapid exploit generation, and measurable economic erosion challenges the traditional “many eyes” security model. Cal.com’s migration to a private repository exemplifies a shift toward privatizing critical components to mitigate AI‑driven threats, while the open‑source community debates whether transparency can be preserved through AI‑assisted defenses.

Reference: https://cal.com/de/blog/cal-com-goes-closed-source-why

Tags: software development, open-source, AI security, vulnerability analysis, industry insights
Written by ITPUB

Official ITPUB account sharing technical insights, community news, and exciting events.
