🤖 AI Summary
Tool-augmented large language models (LLMs) often fall into erroneous reasoning loops during multi-turn interactions and lack interpretable mechanisms for error diagnosis and recovery.
Method: We propose a structured reflection mechanism that explicitly models the "reflection–tool invocation–result" decision pipeline, transforming reflection into a trainable and controllable intermediate step. We design a joint optimization objective combining DAPO (Decoupled Clip and Dynamic Sampling Policy Optimization) and GSPO (Group Sequence Policy Optimization), and introduce a programmable, tool-aware reward function to enable reproducible learning of error-recovery trajectories.
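The tool-aware reward is described only at a high level; a minimal sketch of what such a programmable reward might look like, with hypothetical weights and field names (the paper's actual scheme is not given here), could score each reflect-then-call step on structural validity, executability, argument correctness, and use of the tool result:

```python
import json

# Hypothetical component weights; the paper's actual reward scheme is not specified here.
WEIGHTS = {"structure": 0.2, "executable": 0.3, "args": 0.3, "final": 0.2}

def tool_call_reward(step: dict, registry: set, gold: dict) -> float:
    """Score one reflect-then-call step of a trajectory in [0, 1]."""
    score = 0.0
    # Structural validity: the emitted call must parse as JSON with name/arguments.
    try:
        call = json.loads(step["call"])
    except (json.JSONDecodeError, TypeError):
        return score  # a malformed call earns no further credit
    if "name" not in call or "arguments" not in call:
        return score
    score += WEIGHTS["structure"]
    # Executability: the named tool must exist in the tool registry.
    if call["name"] in registry:
        score += WEIGHTS["executable"]
    # Parameter correctness: arguments must match the reference call.
    if call["arguments"] == gold["arguments"]:
        score += WEIGHTS["args"]
    # Result use: the final answer must actually draw on the tool result.
    if step.get("final_uses_result", False):
        score += WEIGHTS["final"]
    return score
```

Summing weighted, individually checkable components keeps the reward dense and inspectable, which is what makes learning error-recovery trajectories reproducible rather than dependent on opaque scalar feedback.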
Results: Evaluated on BFCL v3 and our newly constructed Tool-Reflection-Bench benchmark, our approach significantly improves multi-turn tool-call success rate and error-repair accuracy while reducing redundant invocations. Empirical results demonstrate enhanced reliability, generalization across diverse tool domains, and improved interpretability through transparent reflection traces.
📝 Abstract
Tool-augmented large language models (LLMs) are usually trained with supervised imitation or coarse-grained reinforcement learning that optimizes single tool calls. Current self-reflection practices rely on heuristic prompts or one-way reasoning: the model is urged to "think more" instead of learning error diagnosis and repair. This approach is fragile in multi-turn interactions; after a failure, the model often repeats the same mistake. We propose structured reflection, which turns the path from error to repair into an explicit, controllable, and trainable action. The agent produces a short yet precise reflection: it diagnoses the failure using evidence from the previous step and then proposes a correct, executable follow-up call. For training, we combine the DAPO and GSPO objectives with a reward scheme tailored to tool use, optimizing the stepwise strategy of Reflect, then Call, then Final. For evaluation, we introduce Tool-Reflection-Bench, a lightweight benchmark that programmatically checks structural validity, executability, parameter correctness, and result consistency. Tasks are built as mini trajectories of erroneous call, reflection, and corrected call, with disjoint train and test splits. Experiments on BFCL v3 and Tool-Reflection-Bench show large gains in multi-turn tool-call success and error recovery, along with a reduction in redundant calls. These results indicate that making reflection explicit and optimizing it directly improves the reliability of tool interaction and offers a reproducible path for agents to learn from failure.
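The abstract's four programmatic checks could be sketched as a single verifier over one mini trajectory. The field names and the tool-registry shape below are illustrative assumptions, not the benchmark's actual schema:

```python
import json

def check_trajectory(traj: dict, registry: dict) -> dict:
    """Apply the four checks to one erroneous-call -> reflection -> corrected-call
    mini trajectory; returns a pass/fail flag per check."""
    results = {"structure": False, "executable": False,
               "params": False, "consistent": False}
    # Structural validity: the corrected call parses and names a tool.
    try:
        call = json.loads(traj["corrected_call"])
    except json.JSONDecodeError:
        return results
    if "name" not in call or "arguments" not in call:
        return results
    results["structure"] = True
    # Executability: the tool exists and all required parameters are supplied.
    spec = registry.get(call["name"])
    results["executable"] = (spec is not None
                             and set(spec["required"]) <= set(call["arguments"]))
    # Parameter correctness: arguments match the reference call exactly.
    results["params"] = call["arguments"] == traj["reference"]["arguments"]
    # Result consistency: the observed result matches the reference result.
    results["consistent"] = traj.get("observed_result") == traj.get("reference_result")
    return results
```

Because every check is a deterministic predicate over the trajectory, train/test splits can be regenerated and rescored exactly, which is what makes the benchmark lightweight and reproducible.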