Machine Learning Algorithms & Natural Language Processing
Apr 14, 2026 · Artificial Intelligence

Balancing Usability, Fun, and Safety: How Fudan’s Post‑00 Team Built XSafeClaw for Controllable AI Agents

Amid soaring hype for autonomous agents, a Meta incident exposed how hidden execution steps can cause real‑world damage, prompting Fudan’s XSafeClaw project to deliver a visual, layer‑by‑layer security framework that makes agent behavior observable, auditable, and safely interceptable.

Agent safety · Human-in-the-loop · Runtime monitoring
10 min read
AI Large Model Application Practice
Apr 13, 2026 · Artificial Intelligence

How Hermes-Agent Enables Self‑Learning Skills for Autonomous AI Agents

Hermes‑Agent introduces a novel self‑learning Skill system that lets AI agents automatically capture, refine, and patch reusable knowledge from complex tasks, using a dual loop of front‑end awareness and back‑end inspection, reinforced by safety guards and a reinforcement‑learning training pipeline.

AI agents · Agent safety · Self‑learning
18 min read
AntTech
Apr 2, 2026 · Information Security

How ClawAegis Secures OpenClaw AI Agents with a Native Immunity System

Ant Group’s AI Security Lab and Tsinghua University have open‑sourced ClawAegis, a native security‑immune framework for OpenClaw agents that protects the entire lifecycle, from initialization through execution, by detecting malicious skill injections, memory poisoning, and permission abuse, while providing dynamic auditing, configurable policies, and resource‑level safeguards.

AI security · Agent safety · OpenClaw
5 min read