🤖 AI Summary
This work addresses the complex safety risks that AI agents face during autonomous tool use and environmental interaction, where existing safeguards lack systematic risk characterization and interpretable diagnostics. To bridge this gap, we propose the first three-dimensional orthogonal risk taxonomy for AI agents, spanning risk sources, failure modes, and consequences, and leverage it to develop ATBench, a fine-grained safety benchmark, along with AgentDoG, a diagnostic defense framework enabling trajectory-level, context-aware monitoring and root-cause tracing. Experimental results demonstrate that our approach significantly outperforms current safety auditing methods across diverse interactive scenarios, overcoming the limitations of conventional binary safety labels. The code, models (Qwen/Llama series, 4B–8B), and dataset are publicly released.
📝 Abstract
The rise of AI agents introduces complex safety and security challenges arising from autonomous tool use and environmental interactions. Current guardrail models lack agentic risk awareness and transparency in risk diagnosis. To build an agentic guardrail that covers the numerous and complex risky behaviors of agents, we first propose a unified three-dimensional taxonomy that orthogonally categorizes agentic risks by their source (where), failure mode (how), and consequence (what). Guided by this structured and hierarchical taxonomy, we introduce a new fine-grained agentic safety benchmark (ATBench) and a Diagnostic Guardrail framework for agent safety and security (AgentDoG). AgentDoG provides fine-grained, contextual monitoring across agent trajectories. More crucially, AgentDoG can diagnose the root causes of unsafe actions, as well as of seemingly safe but unreasonable actions, offering provenance and transparency beyond binary labels to facilitate effective agent alignment. AgentDoG variants are available in three sizes (4B, 7B, and 8B parameters) across the Qwen and Llama model families. Extensive experimental results demonstrate that AgentDoG achieves state-of-the-art performance in agentic safety moderation across diverse and complex interactive scenarios. All models and datasets are openly released.