🤖 AI Summary
Multi-hop question answering (e.g., cross-company profit comparison) challenges standard RAG systems because the critical facts are scattered across multiple documents, leading to insufficient retrieval coverage and low-quality evidence. This paper proposes a lightweight, training-free, index-agnostic enhancement framework: first, a large language model decomposes the complex question into targeted sub-questions for multi-path directed retrieval; second, the candidate passages from all sub-questions are aggregated and re-ranked with an off-the-shelf cross-encoder to improve evidence relevance and completeness. Evaluated on MultiHop-RAG and HotpotQA, the method achieves substantial gains: +36.7% MRR@10 and +11.6% answer F1. Its core contribution is the synergistic pairing of LLM-driven question decomposition with plug-and-play cross-encoder re-ranking, which mitigates information fragmentation in multi-hop settings and strengthens RAG's capacity to integrate and reason over distributed factual knowledge.
📝 Abstract
Grounding large language models (LLMs) in verifiable external sources is a well-established strategy for generating reliable answers. Retrieval-augmented generation (RAG) is one such approach, particularly effective for tasks like question answering: it retrieves passages that are semantically related to the question and then conditions the model on this evidence. However, multi-hop questions, such as "Which company among NVIDIA, Apple, and Google made the biggest profit in 2023?", challenge RAG because the relevant facts are often distributed across multiple documents rather than co-occurring in one source, making it difficult for standard RAG to retrieve sufficient information. To address this, we propose a RAG pipeline that incorporates question decomposition: (i) an LLM decomposes the original query into sub-questions, (ii) passages are retrieved for each sub-question, and (iii) the merged candidate pool is reranked to improve the coverage and precision of the retrieved evidence. We show that question decomposition effectively assembles complementary documents, while reranking reduces noise and promotes the most relevant passages before answer generation. Although reranking itself is standard, we show that pairing an off-the-shelf cross-encoder reranker with LLM-driven question decomposition bridges the retrieval gap on multi-hop questions and provides a practical, drop-in enhancement without any extra training or specialized indexing. We evaluate our approach on the MultiHop-RAG and HotpotQA benchmarks, showing gains in retrieval (MRR@10: +36.7%) and answer accuracy (F1: +11.6%) over standard RAG baselines.
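The three-stage pipeline from the abstract can be sketched in a few dozen lines. This is a toy illustration, not the paper's implementation: the `decompose` step hard-codes sub-questions in place of an LLM call, a word-overlap `lexical_score` stands in for both the retriever and the cross-encoder reranker (in practice one would use a dense retriever and a model such as a sentence-transformers `CrossEncoder`), and the corpus and its dollar figures are invented for the running example, not real financials.

```python
import re
from collections import OrderedDict

# Toy corpus standing in for an indexed document store.
# Figures are illustrative only, not real financial data.
CORPUS = [
    "NVIDIA reported a net income of 30 billion dollars in 2023.",
    "Apple reported a net income of 90 billion dollars in 2023.",
    "Google parent Alphabet reported a net income of 70 billion dollars in 2023.",
    "The 2023 fiscal year saw strong GPU demand.",  # distractor passage
]

def lexical_score(query: str, passage: str) -> float:
    """Word-overlap score; a crude stand-in for a dense retriever
    or a cross-encoder relevance model."""
    q = set(re.findall(r"[a-z0-9]+", query.lower()))
    p = set(re.findall(r"[a-z0-9]+", passage.lower()))
    return len(q & p) / max(len(q), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Step (ii): top-k passages for one sub-question."""
    return sorted(corpus, key=lambda p: lexical_score(query, p), reverse=True)[:k]

def decompose(question: str) -> list[str]:
    """Step (i): placeholder for the LLM decomposition call;
    sub-questions are hard-coded here for the running example."""
    return [
        "What was NVIDIA's profit in 2023?",
        "What was Apple's profit in 2023?",
        "What was Google's profit in 2023?",
    ]

def pipeline(question: str, corpus: list[str], k: int = 2, top_n: int = 3) -> list[str]:
    # Gather per-sub-question results into one deduplicated pool.
    pool = OrderedDict()
    for sub_q in decompose(question):
        for passage in retrieve(sub_q, corpus, k=k):
            pool[passage] = None
    # Step (iii): rerank the merged pool against the ORIGINAL question,
    # as the cross-encoder does in the paper's pipeline.
    reranked = sorted(pool, key=lambda p: lexical_score(question, p), reverse=True)
    return reranked[:top_n]

question = "Which company among NVIDIA, Apple, and Google made the biggest profit in 2023?"
evidence = pipeline(question, CORPUS)
```

Even with this crude scorer, the decomposition step is what guarantees coverage: each sub-question pulls in its own company's passage, so the merged pool contains all three facts needed for the comparison, while the reranker filters out the off-topic distractor.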