🤖 AI Summary
Dense retrievers for open-domain question answering (OpenQA) suffer from limited training data due to the scarcity of human-annotated supporting evidence.
Method: This paper proposes a weakly supervised training framework that leverages large language models (LLMs): pseudo-positive passages are constructed by reranking BM25 first-stage retrieval results according to the LLM's conditional probability of generating the correct answer (formalized below), and the dense retriever is then optimized end-to-end via contrastive learning.
Contribution/Results: To the authors' knowledge, this is the first work to employ LLM-generated confidence scores as scalable, annotation-free weak supervision signals, eliminating reliance on manual evidence labels. Evaluated on four standard OpenQA benchmarks, the proposed method achieves retrieval accuracy and end-to-end QA performance competitive with fully supervised baselines, demonstrating both effectiveness and generalization across datasets.
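Concretely, the weak supervision signal and the retriever objective can be written as follows (notation is ours and follows standard DPR-style contrastive training; the paper's exact formulation may differ): each BM25-retrieved passage $p$ is scored by the log-likelihood the LLM assigns to the gold answer $a$ given the question $q$ and the passage, and the dense retriever with question encoder $E_Q$ and passage encoder $E_P$ is trained with an InfoNCE loss, using the top-scoring passage $p^{+}$ as the pseudo-positive and a set $\mathcal{N}$ of negatives (e.g., in-batch passages):

$$
\mathrm{score}(p) = \sum_{t} \log P_{\mathrm{LLM}}\!\left(a_t \mid q, p, a_{<t}\right), \qquad
\mathcal{L} = -\log \frac{\exp\!\left(E_Q(q)^{\top} E_P(p^{+})\right)}{\exp\!\left(E_Q(q)^{\top} E_P(p^{+})\right) + \sum_{p^{-} \in \mathcal{N}} \exp\!\left(E_Q(q)^{\top} E_P(p^{-})\right)}
$$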
📝 Abstract
In knowledge-intensive tasks such as open-domain question answering (OpenQA), large language models (LLMs) often struggle to generate factual answers when relying solely on their internal (parametric) knowledge. To address this limitation, Retrieval-Augmented Generation (RAG) systems enhance LLMs by retrieving relevant information from external sources, thereby positioning the retriever as a pivotal component. Although dense retrieval demonstrates state-of-the-art performance, its training poses challenges due to the scarcity of ground-truth evidence, largely attributed to the high costs of human annotation. In this paper, we propose W-RAG, a method that draws weak training signals from an LLM's downstream task (such as OpenQA) and fine-tunes the retriever to prioritize passages that most benefit the task. Specifically, we rerank the top-$k$ passages retrieved via BM25 by assessing the probability that the LLM will generate the correct answer for a question given each passage. The highest-ranking passages are then used as positive examples for fine-tuning the dense retriever. We conduct comprehensive experiments across four publicly available OpenQA datasets and demonstrate that our approach enhances both retrieval and OpenQA performance compared to baseline models, achieving results comparable to those of models fine-tuned with human-labeled data.
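As a rough illustration of the reranking step described above, the sketch below scores each BM25-retrieved passage by the LLM's log-probability of the gold answer and keeps the highest-scoring passage as a pseudo-positive. The model name, prompt template, and helper names are illustrative assumptions, not the paper's exact configuration or released code.

```python
# Illustrative sketch: select a pseudo-positive passage by reranking BM25
# candidates with the LLM's answer log-probability (assumed prompt format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; a stronger instruction-tuned LLM would be used in practice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


@torch.no_grad()
def answer_logprob(question: str, passage: str, answer: str) -> float:
    """Sum of token log-probabilities of `answer` given the passage and question."""
    prompt = f"Passage: {passage}\nQuestion: {question}\nAnswer:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(" " + answer, return_tensors="pt",
                           add_special_tokens=False).input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)

    logits = model(input_ids).logits  # (1, seq_len, vocab_size)
    # Logits at position i predict the token at position i + 1, so the answer
    # tokens are predicted by positions [prompt_len - 1, seq_len - 1).
    answer_logits = logits[0, prompt_ids.shape[1] - 1 : input_ids.shape[1] - 1]
    log_probs = torch.log_softmax(answer_logits, dim=-1)
    token_logprobs = log_probs.gather(1, answer_ids[0].unsqueeze(-1)).squeeze(-1)
    return token_logprobs.sum().item()


def select_pseudo_positive(question: str, answer: str, bm25_passages: list[str]) -> str:
    """Rerank the BM25 top-k passages and return the one the LLM finds most helpful."""
    scores = [answer_logprob(question, p, answer) for p in bm25_passages]
    return bm25_passages[max(range(len(scores)), key=scores.__getitem__)]
```

The selected pseudo-positives can then be paired with in-batch or BM25 hard negatives in a contrastive objective such as the InfoNCE loss shown earlier to fine-tune the dense retriever.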