Failure Makes the Agent Stronger: Enhancing Accuracy through Structured Reflection for Reliable Tool Interactions

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tool-augmented large language models (LLMs) often fall into erroneous reasoning loops during multi-turn interactions and lack interpretable error diagnosis and recovery capabilities. Method: a structured reflection mechanism that explicitly models the "reflection–tool invocation–result" decision pipeline, turning reflection into a trainable, controllable intermediate step. Training combines DAPO and GSPO objectives with a programmable, tool-aware reward function, so that error-recovery trajectories can be learned reproducibly. Results: evaluated on BFCL v3 and the newly constructed Tool-Reflection-Bench benchmark, the approach significantly improves multi-turn tool-call success rate and error-repair accuracy while reducing redundant invocations. Empirical results show enhanced reliability, generalization across diverse tool domains, and improved interpretability through transparent reflection traces.
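The paper does not publish its reward code, but a "programmable, tool-aware reward function" of the kind the summary describes can be sketched as a checklist over a proposed call. The component names, weights, and JSON call format below are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of a tool-aware reward: partial credit for structural
# validity, parameter correctness, executability, and result consistency.
# Weights and field names are assumptions for illustration only.
import json


def tool_call_reward(call_text: str, schema: dict, execute) -> float:
    """Score one proposed tool call in [0, 1]."""
    reward = 0.0
    # 1) Structural validity: the call must parse as JSON.
    try:
        call = json.loads(call_text)
    except json.JSONDecodeError:
        return reward  # a malformed call earns nothing
    reward += 0.25

    # 2) Parameter correctness: all required arguments are present.
    required = set(schema.get("required", []))
    if required <= set(call.get("arguments", {})):
        reward += 0.25

    # 3) Executability: the call must run without raising.
    try:
        result = execute(call)
    except Exception:
        return reward
    reward += 0.25

    # 4) Result consistency: a non-empty, non-error result.
    if result is not None and not (isinstance(result, dict) and "error" in result):
        reward += 0.25
    return reward
```

Graded partial credit like this (rather than a single pass/fail signal) is what makes a corrected follow-up call distinguishable from a repeated mistake during policy optimization.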

📝 Abstract
Tool-augmented large language models (LLMs) are usually trained with supervised imitation or coarse-grained reinforcement learning that optimizes single tool calls. Current self-reflection practices rely on heuristic prompts or one-way reasoning: the model is urged to 'think more' instead of learning error diagnosis and repair. This is fragile in multi-turn interactions; after a failure the model often repeats the same mistake. We propose structured reflection, which turns the path from error to repair into an explicit, controllable, and trainable action. The agent produces a short yet precise reflection: it diagnoses the failure using evidence from the previous step and then proposes a correct, executable follow-up call. For training we combine DAPO and GSPO objectives with a reward scheme tailored to tool use, optimizing the stepwise strategy Reflect, then Call, then Final. To evaluate, we introduce Tool-Reflection-Bench, a lightweight benchmark that programmatically checks structural validity, executability, parameter correctness, and result consistency. Tasks are built as mini trajectories of erroneous call, reflection, and corrected call, with disjoint train and test splits. Experiments on BFCL v3 and Tool-Reflection-Bench show large gains in multi-turn tool-call success and error recovery, and a reduction of redundant calls. These results indicate that making reflection explicit and optimizing it directly improves the reliability of tool interaction and offers a reproducible path for agents to learn from failure.
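The abstract describes Tool-Reflection-Bench tasks as mini trajectories of erroneous call, reflection, and corrected call, checked programmatically. A minimal validator in that spirit might look like the following; the field names and the specific checks are assumptions for illustration, not the benchmark's actual schema:

```python
# Hedged sketch of a programmatic check over one reflection mini-trajectory:
# erroneous call -> reflection -> corrected call. Field names are assumed.
import json


def check_trajectory(traj: dict, schema: dict) -> dict:
    """Run simple programmatic checks on one error-recovery trajectory."""
    checks = {}
    # Structural validity: the corrected call must parse as JSON.
    try:
        corrected = json.loads(traj["corrected_call"])
        checks["structural_validity"] = True
    except (json.JSONDecodeError, KeyError):
        return {"structural_validity": False}
    # Parameter correctness: arguments satisfy the tool schema.
    args = corrected.get("arguments", {})
    checks["parameter_correctness"] = set(schema["required"]) <= set(args)
    # Evidence use: the reflection should cite the observed error message,
    # standing in for the executability / result-consistency checks that
    # would require running the real tool.
    checks["cites_evidence"] = traj.get("error", "") in traj.get("reflection", "")
    return checks
```

Because every check is a deterministic function of the trajectory, pass/fail judgments are reproducible across runs, which is the property the benchmark's disjoint train/test splits rely on.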
Problem

Research questions and friction points this paper is trying to address.

Optimizing tool-augmented LLMs beyond single-call training methods
Addressing fragile error recovery in multi-turn tool interactions
Making reflection explicit and trainable for reliable error diagnosis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured reflection for explicit error repair
Combining DAPO and GSPO training objectives
Reflect-Call-Final strategy for tool interactions
Junhao Su
Meituan Inc.
Computer Vision
Yuanliang Wan
Vision Agent Team, Meituan
Junwei Yang
Peking University
Natural Language Processing · Graph Neural Network · AI4Science
Hengyu Shi
MeiGen AI Team, Meituan
Tianyang Han
The Hong Kong Polytechnic University (PolyU)
Image Generation · Multimodal Large Language Model
Junfeng Luo
Vision Agent Team, Meituan; MeiGen AI Team, Meituan
Yurui Qiu
Vision Agent Team, Meituan