When AI Becomes the Suspect: Dissecting a Crypto Theft and Code‑Poisoning Case
A crypto firm lost hundreds of thousands of USDT through a hard‑coded wallet address allegedly inserted by an employee who blamed AI; investigators ruled out AI misconduct and pointed to human sabotage. In a separate incident, a ChatGPT‑generated code snippet secretly exfiltrated private keys, underscoring the emerging security risks of AI‑assisted programming.
Yesterday on Twitter, a crypto company reported that hundreds of thousands of USDT had been transferred to a wallet whose address was hard‑coded into source code submitted by an employee. The employee denied adding the line and blamed the AI; either way, code review had missed it.
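A hard‑coded destination address is exactly the kind of literal a lightweight static check can surface before review. Below is a minimal, hypothetical sketch (the class name, the allowed formats, and the sample address are all illustrative, not from the incident) that flags string literals shaped like Ethereum‑ or TRON‑style addresses, both common for USDT transfers, so a reviewer can confirm each one is intentional:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical reviewer aid: flags string literals that look like
// blockchain wallet addresses embedded directly in source code.
public class AddressLiteralScanner {

    // Ethereum-style: "0x" + 40 hex chars. TRON-style: "T" + 33 base58 chars
    // (base58 excludes 0, O, I, l). Both appear as quoted string literals.
    private static final Pattern ADDRESS_PATTERN = Pattern.compile(
            "\"(0x[0-9a-fA-F]{40}|T[1-9A-HJ-NP-Za-km-z]{33})\"");

    public static List<String> findHardCodedAddresses(String source) {
        List<String> hits = new ArrayList<>();
        Matcher m = ADDRESS_PATTERN.matcher(source);
        while (m.find()) {
            hits.add(m.group(1));
        }
        return hits;
    }

    public static void main(String[] args) {
        // Illustrative snippet with a hard-coded payout address.
        String snippet =
                "String payout = \"0x52908400098527886E0F7030069857D2E4169EE7\";";
        System.out.println(findHardCodedAddresses(snippet));
    }
}
```

A check like this is noisy by design: the point is not to decide whether an address is malicious, only to make sure no hard‑coded destination slips through review unexamined.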
The security team led by Yu Xian concluded that, in their experience, an AI model is unlikely to return working keys or passwords on its own, and that the current evidence points to human sabotage rather than AI misbehavior.
In a separate case, a programmer used ChatGPT to modify code, and the modification introduced a poisoned snippet that, when executed, sent the user's private key in an HTTP request body to a phishing site.
The malicious code traced back to a GitHub repository apparently prepared as AI training material; it embedded a backdoor that could harvest keys from anyone who ran the generated code, leaving it unclear whether the AI passively absorbed the poisoned code or an attacker deliberately seeded it.
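One defensive pattern against this kind of exfiltration is an egress check: before any outbound request leaves the process, refuse to send a body that contains key material to a host outside an explicit allowlist. The sketch below is a minimal illustration under assumed names (the class, the allowlisted host, and the secret‑matching regex are all hypothetical), not a complete data‑loss‑prevention solution:

```java
import java.net.URI;
import java.util.Set;
import java.util.regex.Pattern;

// Hypothetical egress guard: blocks request bodies that appear to carry
// key material when the destination host is not explicitly trusted.
public class EgressGuard {

    // Assumed allowlist of hosts the application legitimately talks to.
    private static final Set<String> ALLOWED_HOSTS =
            Set.of("api.mycompany.example");

    // Crude, illustrative markers of secret material in a request body.
    private static final Pattern KEY_MATERIAL =
            Pattern.compile("(?i)(private[_-]?key|mnemonic|seed[_-]?phrase)");

    public static boolean shouldBlock(String url, String body) {
        String host = URI.create(url).getHost();
        boolean trustedHost = host != null && ALLOWED_HOSTS.contains(host);
        boolean carriesSecret = KEY_MATERIAL.matcher(body).find();
        // Block only when secret-looking data heads to an untrusted host.
        return carriesSecret && !trustedHost;
    }

    public static void main(String[] args) {
        System.out.println(shouldBlock(
                "https://phishing.example/collect",
                "{\"privateKey\":\"abc123\"}")); // exfiltration attempt
        System.out.println(shouldBlock(
                "https://api.mycompany.example/tx",
                "{\"amount\":100}"));            // legitimate request
    }
}
```

A regex match on the body is easily evaded by encoding or renaming fields, so in practice this belongs alongside, not instead of, reviewing what AI‑generated code actually sends and where.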
In the AI era, code generation is convenient, but the security of AI‑generated code is becoming a major concern. Blaming AI may be an easy excuse; the real danger lies in human misuse and in AI‑assisted attacks.
Java Backend Technology
Focus on Java-related technologies: SSM, Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading. Occasionally cover DevOps tools like Jenkins, Nexus, Docker, and ELK. Also share technical insights from time to time, committed to Java full-stack development!
