Robust Tool Use via Fission-GRPO: Learning to Recover from Execution Errors

📅 2026-01-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models struggle to self-correct after encountering execution errors during multi-turn tool use, often degenerating into repetitive, ineffective tool calls that undermine deployment reliability. To this end, the authors propose the Fission-GRPO framework, which, for the first time, dynamically generates error-recovery training samples aligned with the current policy inside the reinforcement learning loop. By leveraging an error simulator to provide diagnostic feedback and combining trajectory fission with on-policy resampling, the method converts execution failures into corrective supervision signals. This sidesteps the distribution shift inherent in static synthetic data and substantially improves error recovery. Evaluated on the BFCL v4 Multi-Turn benchmark, the method improves the error recovery rate of Qwen3-8B by 5.7% absolute and boosts overall accuracy from 42.75% to 46.75%, outperforming existing specialized tool-use agents.

📝 Abstract
Large language models (LLMs) can call tools effectively, yet they remain brittle in multi-turn execution: after a tool-call error, smaller models often degenerate into repetitive invalid re-invocations, failing to interpret error feedback and self-correct. This brittleness hinders reliable real-world deployment, where execution errors are inevitable during tool interaction. We identify a key limitation of current approaches: standard reinforcement learning (RL) treats errors as sparse negative rewards, providing no guidance on how to recover, while pre-collected synthetic error-correction datasets suffer from distribution mismatch with the model's on-policy error modes. To bridge this gap, we propose Fission-GRPO, a framework that converts execution errors into corrective supervision within the RL training loop. Our core mechanism fissions each failed trajectory into a new training instance by augmenting it with diagnostic feedback from a finetuned Error Simulator, then resampling recovery rollouts on-policy. This lets the model learn from the precise errors it makes during exploration, rather than from static, pre-collected error cases. On the BFCL v4 Multi-Turn benchmark, Fission-GRPO improves the error recovery rate of Qwen3-8B by 5.7% absolute and, crucially, yields a 4% absolute overall accuracy gain (42.75% to 46.75%) over GRPO, outperforming specialized tool-use agents.
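The fission mechanism described above can be sketched roughly as follows. This is a minimal, hedged illustration only: the paper does not publish this code, and `run_policy`, `error_simulator`, and the reward handling here are hypothetical stand-ins for the actual policy rollout, finetuned Error Simulator, and GRPO machinery.

```python
import random

def run_policy(prompt):
    """Hypothetical policy rollout: returns (trajectory, succeeded).
    Stands in for sampling a multi-turn tool-use trajectory from the LLM."""
    traj = prompt + " -> tool_call"
    return traj, random.random() > 0.5  # simulate occasional execution errors

def error_simulator(failed_traj):
    """Stand-in for the finetuned Error Simulator: appends diagnostic
    feedback explaining why the tool call failed (content is illustrative)."""
    return failed_traj + " | diagnosis: tool call failed; check argument format"

def fission_grpo_step(prompts, n_rollouts=4):
    """One sketched Fission-GRPO iteration: collect on-policy rollouts,
    fission each failed trajectory into a new training instance augmented
    with diagnostic feedback, and resample a recovery rollout on-policy."""
    batch = []
    for prompt in prompts:
        for _ in range(n_rollouts):
            traj, ok = run_policy(prompt)
            batch.append((traj, 1.0 if ok else 0.0))
            if not ok:
                # Fission: failure + diagnosis becomes a new instance,
                # and recovery is resampled from the *current* policy,
                # avoiding the distribution mismatch of static error data.
                recovery_prompt = error_simulator(traj)
                rec_traj, rec_ok = run_policy(recovery_prompt)
                batch.append((rec_traj, 1.0 if rec_ok else 0.0))
    return batch  # would feed GRPO-style group-relative advantage updates
```

The key design point the sketch tries to capture is that recovery samples are generated inside the RL loop from the model's own current failures, rather than drawn from a pre-collected synthetic error dataset.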
Problem

Research questions and friction points this paper is trying to address.

tool use
execution errors
error recovery
large language models
multi-turn interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fission-GRPO
error recovery
tool use
reinforcement learning
Error Simulator