🤖 AI Summary
This work investigates how search mechanisms and self-feedback can collaboratively enhance the reasoning capabilities of language agents on complex tasks such as mathematical reasoning and tool use. We propose a domain-customized tree/graph search framework integrating task-adapted feedback modeling, self-feedback scoring, and dynamic path pruning. Our analysis reveals, for the first time, that self-feedback alone suffers from significant generalization limitations in search-based reasoning; incorporating ground-truth feedback improves mathematical reasoning accuracy by 12.3%; and domain-specific search design for tool invocation boosts success rate by 27.6%. The core contributions are: (i) a precise delineation of the applicability boundaries between self-feedback and ground-truth feedback, and (ii) a scalable, synergistic search–feedback paradigm that establishes a new methodological foundation for complex reasoning tasks.
📝 Abstract
Recent works have demonstrated that incorporating search during inference can significantly improve the reasoning capabilities of language agents. Some approaches make use of ground-truth feedback, while others rely on the model's own generated feedback. The search algorithm then uses this feedback to produce values that update its criterion for exploring and exploiting various reasoning paths. In this study, we investigate how search and a model's self-feedback can be leveraged for reasoning tasks. First, we explore the differences between ground-truth feedback and self-feedback during search for math reasoning. Second, we observe limitations in applying search techniques to more complex tasks like tool-calling, and design domain-specific approaches to address these gaps. Our experiments reveal generalization challenges when relying solely on self-feedback during search. For search to work effectively, either access to the ground truth is needed, or feedback mechanisms must be carefully designed for the specific task.
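The feedback-driven loop the abstract describes (score partial reasoning paths, then expand the promising ones and prune the rest) can be sketched as a toy best-first search. This is an illustrative sketch only: `expand`, `feedback`, and the beam-style pruning rule are hypothetical stand-ins, not the paper's actual components, and a real system would use a language model to propose steps and to score them.

```python
import heapq

def feedback_guided_search(root, expand, feedback, beam_width=2, max_depth=3):
    """Best-first search over reasoning paths (illustrative sketch).

    expand(path)   -> candidate next steps (stand-in for an LM proposer)
    feedback(path) -> score in (0, 1]; could be self-feedback or a
                      ground-truth check, per the paper's distinction
    Low-scoring branches are pruned by keeping only `beam_width`
    partial paths per depth ("dynamic path pruning").
    """
    frontier = [(-feedback([root]), [root])]  # max-heap via negated scores
    best = frontier[0]
    for _ in range(max_depth):
        next_frontier = []
        for _neg_score, path in frontier:
            for step in expand(path):
                new_path = path + [step]
                heapq.heappush(next_frontier, (-feedback(new_path), new_path))
        # prune: retain only the top-scoring partial paths
        frontier = heapq.nsmallest(beam_width, next_frontier)
        if frontier and frontier[0][0] < best[0]:
            best = frontier[0]
    return best[1], -best[0]

# Toy task: extend a running sum toward a target; feedback rewards proximity.
target = 10
expand = lambda path: [1, 2, 3]                     # candidate next "steps"
feedback = lambda path: 1 / (1 + abs(target - sum(path)))

path, score = feedback_guided_search(4, expand, feedback)
# path == [4, 3, 3] (sums to the target), score == 1.0
```

In this toy setting the feedback function is exact, which plays the role of ground-truth feedback; replacing it with a noisy, model-generated scorer is precisely where the abstract's generalization concerns arise.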