The Dark Side of ChatGPT: Scams, Prompt Injection, and Security Risks
The article examines how ChatGPT's rapid rise has spurred both legitimate opportunities and a surge in illicit activity, from account resale and AI-generated scam scripts to prompt injection used to produce malware, underscoring the need for stricter regulation and security awareness.
Hello, I'm Jack.
ChatGPT's explosive popularity is evident; some have made their first fortune while others are on the brink of legal trouble.
Whenever something becomes hot, clever individuals seize the opportunity.
When Elden Ring was trending, many sold cheat guides on Taobao; when the game "Sheep a Sheep" surged, sellers offered pass‑through aids.
Now that ChatGPT is booming, sellers on platforms like Taobao and Xianyu are offering accounts.
Businesses like these depend on speed: arrive late and the window is gone. Although such listings have since been taken down, months of high demand generated substantial profits.
In December last year I released a ChatGPT video tutorial covering registration; some readers leveraged this information gap to earn money.
ChatGPT bootcamps have even emerged, teaching how to profit from the technology.
These activities involve little technical depth and mainly sell services or information gaps.
Developers with some coding skills have built personal chat mini‑programs offering automatic Q&A and companion packages.
However, platforms have tightened regulations—my tutorial on integrating ChatGPT with WeChat was deleted.
Those who earn legal income from hot tech are matched by those seeking illegal profit.
Recently, the foreign security platform GBHackers exposed a scheme in which hackers used ChatGPT to generate complete scam scripts, packaging them as virtual personas to lure victims into romance scams.
With AI assistance, scammers can more easily trick unsuspecting victims.
This type of fraud is entry‑level; criminals now employ prompt injection to bypass OpenAI's safety limits.
Such manipulation lets ChatGPT write malicious code, and tools built this way have already been sold on dark-web forums in the US.
Criminals have even used ChatGPT to build data-harvesting programs. For example, a post titled "ChatGPT—Benefits of Malware" on a foreign hacker forum described a stealer capable of collecting, compressing, and exfiltrating twelve common file types.
The same tool can generate virus code, ransomware, phishing sites, and more.
Clearly, ChatGPT equips less‑skilled criminals with powerful tools.
The barrier to cybercrime has dropped, and attackers now use AI to break into other people's systems.
Thus, domestic bans on ChatGPT‑related content are understandable.
Rambling
Technology is a double‑edged sword; controlling its usage, ensuring compliance, and protecting data security are critical.
Beyond ChatGPT, other AIGC algorithms are also potent tools.
Recently I saw a project that integrates ChatGPT: it uses an OpenAI API key to create a virtual companion you can chat with.
Features include text dialogue, voice recognition, and expression‑driven interaction.
Project address: https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain
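At its core, a companion like this is just a persona prompt plus the running conversation sent to OpenAI's Chat Completions endpoint. Below is a minimal sketch of that pattern; the model name, persona text, and function names are illustrative assumptions, not taken from the project itself:

```python
import json
import os
import urllib.request

# Standard OpenAI Chat Completions endpoint.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(history, user_text, persona="You are a friendly virtual companion."):
    """Assemble one request: a system persona, the prior dialogue, and the new turn.

    `history` is a list of {"role": ..., "content": ...} message dicts.
    The model name below is an assumption for illustration.
    """
    messages = [{"role": "system", "content": persona}]
    messages += history
    messages.append({"role": "user", "content": user_text})
    return {"model": "gpt-3.5-turbo", "messages": messages}

def chat(history, user_text):
    """Send one turn to the API; requires OPENAI_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(history, user_text)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    # Append both sides of the turn so the persona keeps its memory.
    history += [{"role": "user", "content": user_text},
                {"role": "assistant", "content": reply}]
    return reply
```

The features listed above (voice recognition, expression-driven interaction) would sit on top of this loop; the chat memory itself is nothing more than the growing `history` list resent with every request.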
Is a virtual boyfriend/girlfriend still far away? Probably not.
That's all for today.
IT Services Circle
Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.