Unsafer in Many Turns: Benchmarking and Defending Multi-Turn Safety Risks in Tool-Using Agents

📅 2026-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap in existing benchmarks, which overlook the safety risks of tool-using agents in multi-turn interactions, leading to a disconnect between capability and safety. The study presents the first systematic taxonomy of safety risks specific to multi-turn tool-use scenarios and introduces MT-AgentRisk, the first multi-turn safety evaluation benchmark tailored for such agents. It also proposes ToolShield, a training-free, tool-agnostic dynamic defense framework that enables proactive protection through self-exploratory safety testing and experience distillation. Experiments show that multi-turn attacks raise the attack success rate by 16% on average, while ToolShield reduces it by 30% on average.

📝 Abstract
LLM-based agents are becoming increasingly capable, yet their safety lags behind. This creates a gap between what agents can do and should do. This gap widens as agents engage in multi-turn interactions and employ diverse tools, introducing new risks overlooked by existing benchmarks. To systematically scale safety testing into multi-turn, tool-realistic settings, we propose a principled taxonomy that transforms single-turn harmful tasks into multi-turn attack sequences. Using this taxonomy, we construct MT-AgentRisk (Multi-Turn Agent Risk Benchmark), the first benchmark to evaluate multi-turn tool-using agent safety. Our experiments reveal substantial safety degradation: the Attack Success Rate (ASR) increases by 16% on average across open and closed models in multi-turn settings. To close this gap, we propose ToolShield, a training-free, tool-agnostic, self-exploration defense: when encountering a new tool, the agent autonomously generates test cases, executes them to observe downstream effects, and distills safety experiences for deployment. Experiments show that ToolShield effectively reduces ASR by 30% on average in multi-turn interactions. Our code is available at https://github.com/CHATS-lab/ToolShield.
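The abstract describes ToolShield's self-exploration loop: on encountering a new tool, the agent generates test cases, executes them to observe downstream effects, and distills safety experiences for deployment. A minimal sketch of that loop is below; it is an illustrative assumption, not the paper's implementation. The dummy `delete_file` tool, the heuristic `generate_probes`, and the `SafetyExperience` record stand in for the LLM-driven components.

```python
# Hypothetical sketch of a ToolShield-style self-exploration loop:
# probe a new tool in a sandbox, observe effects, distill safety rules.
# All names here are illustrative, not from the paper's codebase.
from dataclasses import dataclass


@dataclass
class SafetyExperience:
    tool: str
    probe: dict   # arguments used in the self-test
    effect: str   # observed downstream effect
    rule: str     # distilled guidance reused at deployment time


def delete_file(path: str) -> str:
    """Dummy sandboxed tool: simulate deletion and report the effect."""
    if path == "/" or path.startswith("/etc"):
        return "destructive: removed a system path"
    return f"removed user file {path}"


def generate_probes(tool_name: str) -> list[dict]:
    """Stand-in for LLM-generated test cases: one benign, one risky input."""
    return [{"path": "/tmp/scratch.txt"}, {"path": "/etc/passwd"}]


def explore_tool(tool_name: str, tool_fn) -> list[SafetyExperience]:
    """Execute each probe, observe its effect, and distill a safety rule."""
    experiences = []
    for probe in generate_probes(tool_name):
        effect = tool_fn(**probe)
        risky = effect.startswith("destructive")
        rule = (f"refuse {tool_name} on system paths like {probe['path']}"
                if risky
                else f"{tool_name} is safe for paths like {probe['path']}")
        experiences.append(SafetyExperience(tool_name, probe, effect, rule))
    return experiences


if __name__ == "__main__":
    for exp in explore_tool("delete_file", delete_file):
        print(exp.rule)
```

In this sketch the distilled rules would be injected into the agent's context at deployment; since the defense is training-free, no model weights change.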

Problem

Research questions and friction points this paper is trying to address.

multi-turn safety, tool-using agents, safety benchmark, LLM agents, attack success rate
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-turn safety, tool-using agents, MT-AgentRisk, ToolShield, attack success rate