WormGPT: The Dark Twin of ChatGPT Empowering Cybercriminals
WormGPT, a €60‑per‑month black‑hat AI built on GPT‑J, can generate malicious code, phishing emails and other illegal content, exposing serious security risks and prompting experts to recommend BEC training and stricter email verification to mitigate AI‑driven cyber attacks.
In recent months, as ChatGPT’s popularity surged, OpenAI faced scrutiny over AI ethics and data security, culminating in a formal FTC investigation, the first probe by a U.S. regulator into the risks of AI chatbots.
Amid this scrutiny, a “borderless” version of ChatGPT called WormGPT began circulating on the dark web, advertised at €60 (≈ CNY 479) per month. Security firm SlashNext first reported the tool, describing it as a black‑hat alternative that lets users perform any illegal activity they can imagine.
WormGPT is built on GPT‑J, an open‑source model released in 2021, and operates much like ChatGPT: it accepts natural‑language prompts and can output stories, summaries, or code. Unlike ChatGPT or Google Bard, it observes no legal or ethical restrictions, allowing it to generate malicious software, phishing emails, or any other “black‑hat” content on demand.
According to SlashNext, the model was trained on a variety of data sources, heavily weighted toward malware‑related material. Its unrestricted output makes it a powerful weapon for cybercriminals.
NordVPN security researcher Adrianus Warmenhoven called WormGPT “the evil twin of ChatGPT,” noting that it emerged as attackers sought to bypass the safeguards imposed on legitimate models.
To assess the threat, SlashNext conducted a Business Email Compromise (BEC) test. The AI was asked to draft a convincing email pressuring an unwary account manager to pay a fake invoice. The resulting message was highly persuasive and strategically clever, demonstrating WormGPT’s potential for sophisticated phishing campaigns.
Two key advantages of generative AI for BEC attacks emerged: (1) flawless grammar, which reduces the likelihood of spam filters flagging the email; and (2) a lowered technical barrier, enabling even inexperienced attackers to launch convincing campaigns.
SlashNext recommends two defensive measures: (1) regular, targeted BEC training that includes AI‑enhanced attack scenarios, and (2) strict email verification workflows that trigger alerts when external messages impersonate internal executives or vendors.
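The second measure can be illustrated with a small sketch. One common verification rule flags display‑name impersonation: an external sender whose display name matches an internal executive or known vendor. The company domain and protected‑name list below are illustrative assumptions, not part of any vendor's product.

```python
from email.utils import parseaddr

# Hypothetical values for illustration only.
INTERNAL_DOMAIN = "example.com"
PROTECTED_NAMES = {"jane doe", "acme billing"}  # executives / vendors to guard

def should_alert(from_header: str) -> bool:
    """Flag external messages whose display name impersonates a protected name."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    is_external = domain != INTERNAL_DOMAIN
    name_matches = display_name.strip().lower() in PROTECTED_NAMES
    return is_external and name_matches
```

For example, `should_alert('"Jane Doe" <jane@evil.biz>')` triggers an alert, while the same display name sent from the internal domain does not. A production workflow would layer this with SPF/DKIM/DMARC checks rather than rely on the From header alone.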
Beyond WormGPT, recent “grandma loophole” incidents show that ChatGPT can be coaxed into revealing Windows product keys, underscoring ongoing challenges in AI safety and ethics.
These developments highlight the need for continued research into data quality, algorithmic safeguards, and ethical considerations, while users must remain vigilant and avoid over‑reliance on AI tools.
Programmer DD
A tinkering programmer and author of "Spring Cloud Microservices in Action"