SuanNi
Apr 10, 2026 · Information Security

How Tiny Memory Files Turn AI Assistants into Hackable Backdoors

Researchers from UC Berkeley, NUS, Tencent, and ByteDance show that a single hidden line in an AI assistant's memory file can make OpenClaw leak core keys or erase disks. The post details a three-dimensional CIK attack model, real-world tests on four top LLMs, and mitigation strategies.

AI security · CIK architecture · memory injection
0 likes · 11 min read