🤖 AI Summary
Retrieval-augmented generation (RAG) performs poorly at multi-document information integration and complex reasoning, a limitation especially evident on the QUEST-LOFT benchmark. This paper proposes an enhancement framework to address it. The method enforces structured output generation, explicitly articulating both reasoning chains and supporting evidence, and adds an answer re-verification mechanism so that retrieval, reasoning, and validation are jointly optimized. Crucially, it builds on standard RAG architectures and requires no ultra-long-context modeling. A thorough human evaluation protocol ensures the reliability of the reported results. Experiments show that the approach significantly outperforms state-of-the-art long-context language models and mainstream RAG baselines on QUEST-LOFT, validating that structured reasoning guidance and iterative verification are critical to robust complex question answering. The framework offers a scalable, interpretable paradigm for RAG systems tackling distributed knowledge and deep-reasoning tasks.
📝 Abstract
Despite the popularity of retrieval-augmented generation (RAG) as a solution for grounded QA in both academia and industry, current RAG methods struggle with questions where the necessary information is distributed across many documents or where retrieval must be combined with complex reasoning. Recently, the LOFT study showed that this limitation also applies to approaches based on long-context language models, with the QUEST benchmark exhibiting particularly large headroom. In this paper, we provide an in-depth analysis of the factors contributing to the poor performance on QUEST-LOFT and publish updated numbers based on a thorough human evaluation. We then demonstrate that RAG can be optimized to significantly outperform long-context approaches when combined with a structured output format containing reasoning and evidence, optionally followed by answer re-verification.
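To make the idea concrete, here is a minimal sketch of what a structured output with reasoning and evidence, followed by a re-verification pass, could look like. The field names (`reasoning`, `evidence`, `answers`) and the `supports` predicate are illustrative assumptions, not the paper's exact schema.

```python
import json

def parse_structured_output(raw: str) -> dict:
    """Parse a model response that must contain reasoning, evidence, and answers."""
    record = json.loads(raw)
    missing = {"reasoning", "evidence", "answers"} - record.keys()
    if missing:
        raise ValueError(f"structured output missing fields: {missing}")
    return record

def reverify(record: dict, supports) -> list:
    """Re-verification pass: keep only answers whose cited evidence supports them.

    `supports(answer, evidence)` is a hypothetical check, e.g. a second model
    call or a string-match against the retrieved passages.
    """
    return [a for a in record["answers"] if supports(a, record["evidence"])]

# Toy example: the model cites two documents and intersects their contents.
raw = json.dumps({
    "reasoning": "Doc 3 lists candidates X and Z; doc 7 confirms only Z; so Z.",
    "evidence": ["doc3:para2", "doc7:para5"],
    "answers": ["Z"],
})
record = parse_structured_output(raw)
verified = reverify(record, supports=lambda a, ev: len(ev) > 0)
print(verified)  # ['Z']
```

The point of the structure is that each answer arrives with machine-checkable provenance, so a cheap second pass can drop unsupported answers instead of trusting the generator blindly.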