Overview of the TREC 2021 deep learning track

🤖 AI Summary
This study addresses performance degradation in large-scale pretrained deep neural ranking models for document and passage retrieval, caused by declining completeness of NIST judgments and misalignment of legacy relevance labels after dataset refreshes and corpus expansion. Method: using the updated MS MARCO datasets, the track systematically compares end-to-end single-stage ranking against multi-stage retrieval pipelines, quantifies for the first time the impact of label incompleteness on training-signal quality, and proposes a framework for assessing the quality of labels migrated to the new collections. The approach combines large pretrained language models with a hierarchical retrieval architecture, preserving single-stage efficiency while validating the remaining accuracy advantage of multi-stage pipelines. Contribution/Results: experiments demonstrate clear improvements over traditional retrieval baselines, and the work offers principled guidance on label decay, annotation sustainability, and architectural scalability in dynamically evolving retrieval collections.

๐Ÿ“ Abstract
This is the third year of the TREC Deep Learning track. As in previous years, we leverage the MS MARCO datasets, which made hundreds of thousands of human-annotated training labels available for both passage and document ranking tasks. In addition, this year we refreshed both the document and the passage collections, which led to a nearly four times increase in the document collection size and a nearly 16 times increase in the passage collection size. Deep neural ranking models that employ large-scale pretraining continued to outperform traditional retrieval methods this year. We also found that single-stage retrieval can achieve good performance on both tasks, although it still does not perform on par with multi-stage retrieval pipelines. Finally, the increase in collection size and the general data refresh raised questions about the completeness of NIST judgments and the quality of the training labels that were mapped to the new collections from the old ones, which we discuss in this report.
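The single-stage versus multi-stage comparison in the abstract can be sketched as follows. This is a minimal illustration, not the track's actual systems: `first_stage_score` stands in for a cheap lexical matcher such as BM25, and `rerank_score` stands in for an expensive pretrained neural re-ranker; both scoring functions are hypothetical placeholders.

```python
def first_stage_score(query: str, doc: str) -> int:
    # Stand-in for a cheap lexical first-stage scorer (e.g. BM25):
    # count occurrences of query terms in the document.
    terms = set(query.lower().split())
    return sum(1 for t in doc.lower().split() if t in terms)

def rerank_score(query: str, doc: str) -> float:
    # Stand-in for a strong neural re-ranker; here, term overlap
    # normalized by document length to break first-stage ties.
    return first_stage_score(query, doc) / (1 + len(doc.split()))

def two_stage_retrieve(query: str, corpus: list[str], k: int = 10) -> list[str]:
    # Stage 1: rank the whole corpus with the cheap scorer, keep top-k.
    candidates = sorted(corpus,
                        key=lambda d: first_stage_score(query, d),
                        reverse=True)[:k]
    # Stage 2: re-rank only the candidates with the expensive scorer.
    return sorted(candidates,
                  key=lambda d: rerank_score(query, d),
                  reverse=True)
```

A single-stage system would simply return the stage-1 ranking; the pipeline above spends the expensive model's budget only on the k candidates, which is why multi-stage systems can afford stronger (slower) rankers.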
Problem

Research questions and friction points this paper addresses.

Evaluating deep neural ranking models for document and passage retrieval
Assessing the performance of single-stage versus multi-stage retrieval pipelines
Analyzing the impact of collection-size increases and label-quality issues
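The judgment-completeness concern above is commonly quantified as the fraction of a run's top-k results that appear in the judgment pool (often reported as "judged@k"). A minimal sketch, with illustrative inputs rather than the track's actual qrels:

```python
def judged_at_k(ranked_doc_ids: list[str],
                judged_doc_ids: set[str],
                k: int = 10) -> float:
    # Fraction of the top-k retrieved documents that have a NIST
    # relevance judgment; low values suggest the pool is incomplete
    # for this run, making evaluation metrics less trustworthy.
    top = ranked_doc_ids[:k]
    if not top:
        return 0.0
    return sum(1 for d in top if d in judged_doc_ids) / len(top)
```

When a corpus grows (here, 4x for documents and 16x for passages) while the judging budget stays fixed, judged@k tends to drop, which is exactly the completeness question the track raises.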
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage the MS MARCO datasets for training
Use large-scale pretrained neural models
Employ a single-stage retrieval method