🤖 AI Summary
Existing RAG methods tightly couple query rewriting with dense retrievers, severely limiting their compatibility with hybrid retrieval (sparse/dense/web) and multi-hop reasoning capabilities. This paper proposes a hierarchical retrieval architecture: a high-level searcher decomposes complex queries and orchestrates multi-hop reasoning logic in a decoupled manner, while low-level searchers—including a Lucene-syntax-based specialized sparse retriever, a dense retriever, and a web retriever—execute retrieval in parallel. Our approach achieves the first full decoupling of query rewriting from retrieval components, enabling flexible, extensible hybrid scheduling and multi-source fusion. Evaluated on five single- and multi-hop QA benchmarks, our method consistently outperforms mainstream RAG baselines and significantly surpasses GPT-4o, achieving state-of-the-art performance in both answer accuracy and completeness.
📝 Abstract
Retrieval-Augmented Generation (RAG) is a crucial method for mitigating hallucinations in Large Language Models (LLMs) and integrating external knowledge into their responses. Existing RAG methods typically employ query rewriting to clarify the user intent and manage multi-hop logic, while using hybrid retrieval to expand the search scope. However, the tight coupling of query rewriting to the dense retriever limits its compatibility with hybrid retrieval, impeding further RAG performance improvements. To address this challenge, we introduce a high-level searcher that decomposes complex queries into atomic queries, independent of any retriever-specific optimizations. Additionally, to harness the strengths of sparse retrievers for precise keyword retrieval, we have developed a new sparse searcher that employs Lucene syntax to enhance retrieval accuracy. Alongside web and dense searchers, these components seamlessly collaborate within our proposed method, LevelRAG. In LevelRAG, the high-level searcher orchestrates the retrieval logic, while the low-level searchers (sparse, web, and dense) refine the queries for optimal retrieval. This approach enhances both the completeness and accuracy of the retrieval process, overcoming challenges associated with current query rewriting techniques in hybrid retrieval scenarios. Empirical experiments conducted on five datasets, encompassing both single-hop and multi-hop question answering tasks, demonstrate the superior performance of LevelRAG compared to existing RAG methods. Notably, LevelRAG outperforms the state-of-the-art proprietary model, GPT-4o, underscoring its effectiveness and potential impact on the RAG field.
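The decoupling described above can be sketched as a minimal hierarchy. This is an illustrative sketch only, not the authors' actual API: all class names, the trivial query decomposition, the toy Lucene-style rewrite, and the score-sort fusion are assumptions standing in for the paper's LLM-driven components.

```python
# Hypothetical sketch of LevelRAG's architecture (illustrative names):
# a high-level searcher decomposes a complex query into atomic queries,
# then dispatches each one to retriever-specific low-level searchers,
# which rewrite it for their own backend before retrieving.
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    score: float


class LowLevelSearcher:
    """Base class: each searcher refines the atomic query for its backend."""

    def search(self, query: str) -> list[Document]:
        raise NotImplementedError


class SparseSearcher(LowLevelSearcher):
    def search(self, query: str) -> list[Document]:
        # Stand-in for the paper's Lucene-syntax rewrite: mark every
        # term as required for precise keyword matching (illustrative).
        lucene_query = " ".join(f"+{t}" for t in query.split())
        return [Document(f"sparse hit for: {lucene_query}", 0.9)]


class DenseSearcher(LowLevelSearcher):
    def search(self, query: str) -> list[Document]:
        return [Document(f"dense hit for: {query}", 0.8)]


class WebSearcher(LowLevelSearcher):
    def search(self, query: str) -> list[Document]:
        return [Document(f"web hit for: {query}", 0.7)]


class HighLevelSearcher:
    """Orchestrates retrieval logic; decoupled from any specific retriever."""

    def __init__(self, searchers: list[LowLevelSearcher]):
        self.searchers = searchers

    def decompose(self, query: str) -> list[str]:
        # The paper uses an LLM to break multi-hop questions into atomic
        # queries; a trivial split on " and " stands in here.
        return [q.strip() for q in query.split(" and ")]

    def search(self, query: str) -> list[Document]:
        results: list[Document] = []
        for atomic in self.decompose(query):
            for searcher in self.searchers:  # hybrid, multi-source retrieval
                results.extend(searcher.search(atomic))
        # Multi-source fusion: a simple score sort stands in for the
        # paper's aggregation of evidence across retrievers.
        return sorted(results, key=lambda d: d.score, reverse=True)
```

Because the high-level searcher only ever sees the `LowLevelSearcher` interface, adding or swapping retrieval backends requires no change to the query-decomposition logic, which is the decoupling the abstract emphasizes.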