Training Dense Retrievers with Multiple Positive Passages

📅 2026-02-13
📈 Citations: 0
Influential: 0

📝 Abstract
Modern knowledge-intensive systems, such as retrieval-augmented generation (RAG), rely on effective retrievers to establish the performance ceiling for downstream modules. However, retriever training has been bottlenecked by sparse, single-positive annotations, which lead to false-negative noise and suboptimal supervision. While the advent of large language models (LLMs) makes it feasible to collect comprehensive multi-positive relevance labels at scale, the optimal strategy for incorporating these dense signals into training remains poorly understood. In this paper, we present a systematic study of multi-positive optimization objectives for retriever training. We unify representative objectives, including Joint Likelihood (JointLH), Summed Marginal Likelihood (SumMargLH), and Log-Sum-Exp Pairwise (LSEPair) loss, under a shared contrastive learning framework. Our theoretical analysis characterizes their distinct gradient behaviors, revealing how each allocates probability mass across positive document sets. Empirically, we conduct extensive evaluations on Natural Questions, MS MARCO, and the BEIR benchmark across two realistic regimes: homogeneous LLM-annotated data and heterogeneous mixtures of human and LLM labels. Our results show that LSEPair consistently achieves superior robustness and performance across settings, while JointLH and SumMargLH exhibit high sensitivity to the quality of positives. Furthermore, we find that the simple strategy of random sampling (Rand1LH) serves as a reliable baseline. By aligning theoretical insights with empirical findings, we provide practical design principles for leveraging dense, LLM-augmented supervision to enhance retriever effectiveness.
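The abstract names three objectives without defining them. As a rough sketch, the snippet below implements the forms these names usually denote in the multi-positive contrastive literature: Joint Likelihood sums one InfoNCE term per positive, Summed Marginal Likelihood softmax-normalizes the pooled probability mass of all positives, and Log-Sum-Exp Pairwise aggregates all negative-minus-positive score margins. The exact formulations in the paper may differ; function names and the use of raw (untempered) scores here are illustrative assumptions.

```python
import numpy as np

def log_sum_exp(x):
    # Numerically stable log(sum(exp(x))).
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def joint_lh(pos, neg):
    # JointLH (assumed form): every positive must win its own softmax
    # against the full candidate pool, so the loss sums one full
    # InfoNCE term per positive.
    lse_all = log_sum_exp(np.concatenate([pos, neg]))
    return float(np.sum(lse_all - pos))

def sum_marg_lh(pos, neg):
    # SumMargLH (assumed form): the positives share a single softmax;
    # only their *total* probability mass must dominate, so one strong
    # positive can satisfy the objective for the whole set.
    lse_all = log_sum_exp(np.concatenate([pos, neg]))
    return float(lse_all - log_sum_exp(pos))

def lse_pair(pos, neg):
    # LSEPair (assumed form): log-sum-exp over all (negative - positive)
    # score margins, which concentrates gradient on the hardest pairs.
    margins = neg[None, :] - pos[:, None]  # shape (|P|, |N|)
    return float(np.log1p(np.sum(np.exp(margins))))
```

With, say, `pos = [2.0, 1.5]` and `neg = [0.5, 0.0]`, all three losses are positive and shrink as positive scores rise; the differing treatment of the positive set (per-positive terms vs. pooled mass vs. pairwise margins) is what the paper's gradient analysis contrasts.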
Problem

Research questions and friction points this paper is trying to address.

dense retrieval
multi-positive supervision
retriever training
false-negative noise
LLM-augmented labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-positive retrieval
dense retriever
LLM-augmented supervision
contrastive learning
LSEPair loss
Benben Wang
Xidian University
Minghao Tang
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Hengran Zhang
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Jiafeng Guo
Professor, Institute of Computing Technology, CAS
Information Retrieval, Machine Learning, Text Analysis, NeuIR
Keping Bi
Institute of Computing Technology, Chinese Academy of Sciences
Information Retrieval