AI Summary
In multi-hop question answering, large language models (LLMs) frequently suffer from hallucination and erroneous reasoning paths, hindering performance. To address this, we propose PathFinder, a novel framework that integrates Monte Carlo Tree Search (MCTS) with LLM-based feedback. PathFinder employs MCTS to generate diverse reasoning trajectories; leverages an LLM-as-a-judge to evaluate and filter high-confidence paths using sub-answer recall; dynamically reformulates failed sub-queries to enhance path robustness; and incorporates retrieval-augmented generation (RAG) for closed-loop optimization. Evaluated on multiple mainstream multi-hop QA benchmarks, PathFinder achieves significant accuracy improvements over strong baselines, mitigating hallucination, reducing erroneous reasoning, and minimizing retrieval failures. Our results empirically validate the efficacy of an interpretable reasoning paradigm that coordinates search and feedback for complex multi-step QA.
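The sub-answer-recall filtering step can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Trace` type, the normalization rule, the `filter_traces` helper, and the 0.8 default threshold are all assumptions introduced here; in the actual framework the judgment is made by an LLM rather than string matching.

```python
# Hedged sketch: keep only reasoning traces whose intermediate
# sub-answers cover the gold sub-answers, preferring shorter traces.
from dataclasses import dataclass

@dataclass
class Trace:
    steps: list        # list of (sub_query, sub_answer) pairs
    final_answer: str

def normalize(text: str) -> str:
    """Crude string normalization standing in for LLM-judged equivalence."""
    return " ".join(text.lower().split())

def sub_answer_recall(trace: Trace, gold_sub_answers: list) -> float:
    """Fraction of gold sub-answers produced somewhere along the trace."""
    produced = {normalize(a) for _, a in trace.steps}
    hits = sum(1 for g in gold_sub_answers if normalize(g) in produced)
    return hits / len(gold_sub_answers) if gold_sub_answers else 0.0

def filter_traces(traces, gold_sub_answers, threshold=0.8):
    """Drop low-recall traces; rank the survivors shortest-first,
    so erroneous and lengthy traces are both penalized."""
    kept = [t for t in traces if sub_answer_recall(t, gold_sub_answers) >= threshold]
    return sorted(kept, key=lambda t: len(t.steps))
```

Sorting survivors by length reflects the stated goal of filtering out both erroneous and unnecessarily long traces from the training data.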
Abstract
Multi-hop question answering is a challenging task in which language models must reason over multiple steps to reach the correct answer. Leveraging the reasoning capabilities of large language models (LLMs), existing systems decompose an input question into multiple steps in order to analyze, retrieve, and reason. However, training-based approaches to this problem still suffer from LLM hallucinations and incorrect reasoning paths that hinder performance. Hence, we propose PATHFINDER, an approach that: (i) uses Monte Carlo Tree Search to generate training path traces, (ii) improves training data quality by filtering erroneous and lengthy traces using sub-answer recall and LLM-as-a-judge verification, and (iii) reformulates sub-queries to handle failed retrieval cases. By following these steps, we demonstrate that PATHFINDER improves multi-hop QA performance on public benchmark datasets.
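Step (iii), retrying a failed retrieval with a reformulated sub-query, can be sketched as a simple loop. Everything here is an assumption for illustration: `retrieve` is a toy word-overlap retriever and `reformulate` is a dictionary rewrite standing in for an LLM-generated reformulation; the paper's actual retriever and prompts differ.

```python
# Hedged sketch of retrieval with reformulation on failure.

def retrieve(query: str, corpus: list) -> list:
    """Toy retriever: return passages sharing at least one word with the query."""
    q_words = set(query.lower().split())
    return [p for p in corpus if q_words & set(p.lower().split())]

def reformulate(query: str, synonyms: dict) -> str:
    """Stand-in for an LLM rewrite: substitute known synonyms word-by-word."""
    return " ".join(synonyms.get(w, w) for w in query.split())

def retrieve_with_retry(query: str, corpus: list, synonyms: dict, max_retries: int = 2):
    """Retrieve; on an empty result, reformulate the sub-query and try again."""
    for _ in range(max_retries + 1):
        hits = retrieve(query, corpus)
        if hits:                                  # evidence found: stop retrying
            return query, hits
        query = reformulate(query, synonyms)      # failed: rewrite and retry
    return query, []
```

The closed-loop idea is that a retrieval failure triggers a rewrite of the sub-query rather than propagating an empty context to the next reasoning step.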