ByteDance SE Lab
Apr 1, 2026 · Information Security

How Hidden Prompt Attacks Threaten OpenClaw Agents and the AgentArmor Defense

The article analyzes how malicious prompt injections can hijack OpenClaw agents' decision logic, outlines three core risk categories—intent deviation, workflow hijack, and data leakage—and presents AgentArmor's runtime protection framework that uses intent alignment, control‑flow integrity, and data‑flow confidentiality checks to mitigate these threats.

AI security · AgentArmor · OpenClaw
19 min read
Black & White Path
Mar 10, 2026 · Information Security

Inside OpenClaw Skill Market: Popularity, Threats, and Defense Strategies

The article analyzes OpenClaw's rapidly growing Skill ecosystem, exposing over 600 malicious plugins hidden among more than 13,000 skills, details four poisoning techniques, presents a multi‑source detection pipeline with AI‑driven semantic auditing, and offers practical defenses for both enterprises and ordinary users.

AI security · AgentArmor · OpenClaw
18 min read
Volcano Engine Developer Services
Oct 23, 2025 · Artificial Intelligence

How Jeddak AgentArmor Secures AI Agents: A Deep Dive into Trustworthy AI

This article examines ByteDance's Jeddak AgentArmor framework, detailing the systemic risks of intent misinterpretation and constraint violations in AI agents, a full‑lifecycle threat model, dual probabilistic‑trust and policy mechanisms, and real‑world validation cases that demonstrate its effectiveness.

AI security · AgentArmor · policy compliance
15 min read
Volcano Engine Developer Services
Oct 9, 2025 · Artificial Intelligence

Why AI Agents Risk Losing Control and How AgentArmor Secures Them

The article examines the emerging security challenges of AI agents, outlines four fundamental vulnerabilities, and introduces the AgentArmor framework—featuring a graph constructor, property registry, and type system—to compile agent behavior into verifiable programs and dramatically reduce attack success rates.

AI Agent · AgentArmor · Program Dependency Graph
15 min read