🤖 AI Summary
This work addresses the challenge that large language models and existing dense retrievers struggle to access rare knowledge in long-tail question answering tasks. To this end, the authors propose the RPDR framework, which enhances retrieval of long-tail knowledge through a three-stage pipeline: synthetic data generation, round-trip prediction–based sample selection, and retriever training. The key innovation lies in a round-trip prediction mechanism that identifies high-value training samples, coupled with a dynamic routing strategy that directs queries to specialized retrieval modules. Experimental results demonstrate that RPDR significantly outperforms baseline methods, including BM25 and Contriever, on the PopQA and EntityQuestions benchmarks, with particularly pronounced gains in the extreme long-tail categories.
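The round-trip selection step can be illustrated with a minimal sketch: a synthetic (query, passage) pair is kept only if issuing the query to a retriever recovers its source passage. The toy word-overlap retriever and all names below are illustrative assumptions for exposition, not the authors' implementation.

```python
def tokens(text):
    """Lowercased word set with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def round_trip_select(pairs, retrieve, top_k=1):
    """Keep pairs whose passage round-trips back into the top-k results."""
    return [(q, p) for q, p in pairs if p in retrieve(q)[:top_k]]

# Toy corpus and lexical-overlap retriever for demonstration only.
corpus = [
    "Marie Curie won two Nobel Prizes.",
    "The Amazon river flows through Brazil.",
]

def toy_retrieve(query):
    q = tokens(query)
    return sorted(corpus, key=lambda p: -len(q & tokens(p)))

pairs = [
    ("Who won two Nobel Prizes?", "Marie Curie won two Nobel Prizes."),
    ("Where do people live?", "The Amazon river flows through Brazil."),
]
kept = round_trip_select(pairs, toy_retrieve)  # only the first pair survives
```

In this reading, pairs that fail the round trip are treated as too hard (or too noisy) to be useful training signal, so only "easy-to-learn" instances reach the retriever-training stage.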
📝 Abstract
Long-tail question answering presents significant challenges for large language models (LLMs) due to their limited ability to acquire and accurately recall less common knowledge. Retrieval-augmented generation (RAG) systems have shown great promise in mitigating this limitation by integrating external retrieval mechanisms. However, dense retrieval models often face the same difficulty when generalizing to rare or niche knowledge. In this study, we introduce RPDR, a novel data augmentation framework that selects high-quality, easy-to-learn training data to enhance dense retrievers. Our approach is built around three core components: synthetic data generation, data selection with round-trip prediction to identify easy-to-learn instances, and retriever training on these instances. We evaluate RPDR on two long-tail retrieval benchmarks, PopQA and EntityQuestions, demonstrating substantial improvements over existing retrievers such as BM25 and Contriever, especially on extremely long-tail categories. We identify the strengths and limitations of RPDR through detailed human analysis and propose a routing mechanism that dynamically directs queries to specialized retrieval modules to further improve retrieval performance.
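The routing idea can be sketched as a dispatcher that sends each query either to a long-tail-specialized retriever or to a general one. The popularity heuristic, entity list, and module names below are illustrative assumptions; the abstract does not specify the actual routing policy.

```python
# Stand-in for a popularity statistic (e.g., entity frequency in a corpus).
RARE_ENTITIES = {"ljubo milicevic"}

def looks_long_tail(query):
    """Toy router: flag queries that mention a known rare entity."""
    q = query.lower()
    return any(ent in q for ent in RARE_ENTITIES)

def general_retriever(query):
    return f"[general] results for: {query}"

def long_tail_retriever(query):
    return f"[long-tail] results for: {query}"

def route(query):
    """Dispatch to the specialized module when the query looks long-tail."""
    module = long_tail_retriever if looks_long_tail(query) else general_retriever
    return module(query)
```

A real router would replace the hard-coded entity set with a learned or statistics-based popularity estimate, but the dispatch structure stays the same.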