Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the critical safety challenges in multi-step tool-augmented language agents, where a single erroneous action—such as accessing sensitive files—can lead to irreversible harm, and conventional alignment methods struggle with sequential decision-making, adversarial feedback, and overconfident reasoning. To mitigate these risks, we propose MOSAIC, a novel framework that explicitly models safe reasoning and refusal as first-class, learnable actions. MOSAIC employs a “plan–check–execute or refuse” loop to enable dynamic safety control and is trained via preference-based reinforcement learning without requiring trajectory-level annotations. Zero-shot evaluations on Qwen2.5-7B, Qwen3-4B-Thinking, and Phi-4 demonstrate that MOSAIC reduces harmful behaviors by up to 50%, improves refusal rates against adversarial prompt injections by over 20%, significantly curtails privacy leaks, and maintains or even enhances performance on benign tasks.

📝 Abstract
Agentic language models operate in a fundamentally different safety regime than chat models: they must plan, call tools, and execute long-horizon actions where a single misstep, such as accessing files or entering credentials, can cause irreversible harm. Existing alignment methods, largely optimized for static generation and task completion, break down in these settings due to sequential decision-making, adversarial tool feedback, and overconfident intermediate reasoning. We introduce MOSAIC, a post-training framework that aligns agents for safe multi-step tool use by making safety decisions explicit and learnable. MOSAIC structures inference as a plan, check, then act or refuse loop, with explicit safety reasoning and refusal as first-class actions. To train without trajectory-level labels, we use preference-based reinforcement learning with pairwise trajectory comparisons, which captures safety distinctions often missed by scalar rewards. We evaluate MOSAIC zero-shot across three model families, Qwen2.5-7B, Qwen3-4B-Thinking, and Phi-4, and across out-of-distribution benchmarks spanning harmful tasks, prompt injection, benign tool use, and cross-domain privacy leakage. MOSAIC reduces harmful behavior by up to 50%, increases harmful-task refusal by over 20% on injection attacks, cuts privacy leakage, and preserves or improves benign task performance, demonstrating robust generalization across models, domains, and agentic settings.
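The "plan, check, then act or refuse" loop described in the abstract can be illustrated with a minimal control-flow sketch. This is a hypothetical illustration only: the function names (`plan`, `safety_check`, `run_agent`) and the keyword-based check are assumptions for exposition, not MOSAIC's actual API, which delegates both planning and safety reasoning to the trained model.

```python
# Hypothetical sketch of a plan -> check -> act-or-refuse loop.
# All names and the trivial safety check are illustrative assumptions.

from dataclasses import dataclass

REFUSE = "refuse"

@dataclass
class Step:
    action: str     # proposed tool call, e.g. "read_file(...)"
    rationale: str  # the agent's explicit safety reasoning

def plan(task: str, history: list) -> Step:
    # Placeholder planner: in the paper's setting, the language model
    # proposes the next tool call together with its reasoning.
    return Step(action=f"tool_call_for({task})", rationale="routine step")

def safety_check(step: Step) -> bool:
    # Placeholder check: stands in for the model's explicit safety
    # reasoning over the proposed action.
    unsafe_markers = ("credentials", "sensitive_file")
    return not any(m in step.action for m in unsafe_markers)

def run_agent(task: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        step = plan(task, history)
        if not safety_check(step):
            # Refusal is modeled as a first-class action, ending the episode.
            history.append((REFUSE, step.rationale))
            break
        history.append((step.action, "executed"))
    return history
```

The key structural point mirrored here is that refusal is an explicit action the loop can emit at any step, rather than an absence of output.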
Problem

Research questions and friction points this paper is trying to address.

agentic reasoning
tool use safety
multi-step decision-making
harmful behavior
privacy leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic reasoning
safe tool use
refusal as action
preference-based reinforcement learning
trajectory comparison
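The "preference-based reinforcement learning" with "pairwise trajectory comparisons" listed above typically builds on a Bradley–Terry-style objective. The sketch below shows that standard loss family; MOSAIC's exact training objective is not specified here, so treat this as a generic reference point, not the paper's method.

```python
# Minimal Bradley-Terry-style pairwise preference loss: the standard
# objective family behind preference-based RL. Generic sketch only;
# the paper's actual objective may differ.

import math

def pairwise_preference_loss(score_safe: float, score_unsafe: float) -> float:
    """Negative log-probability that the safer trajectory is preferred.

    score_*: scalar scores (e.g. summed log-probabilities) the policy
    assigns to the preferred (safe) and rejected (unsafe) trajectory.
    """
    # P(safe preferred) = sigmoid(score_safe - score_unsafe)
    margin = score_safe - score_unsafe
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

A pairwise comparison like this can separate two trajectories that a scalar reward would score identically, which is the distinction the abstract attributes to trajectory-level comparisons.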
Aradhye Agarwal
Microsoft Research
Gurdit Siyan
Microsoft Research
Yash Pandya
Microsoft Research
Joykirat Singh
Microsoft Research
Akshay Nambi
Principal Researcher, Microsoft Research
Machine Learning · LLMs · AI for Social Good · Mobile Computing & Systems
Ahmed Awadallah
Microsoft Research