Abstract
This is the third year of the TREC Deep Learning track. As in previous years, we leverage the MS MARCO datasets, which make hundreds of thousands of human-annotated training labels available for both the passage and document ranking tasks. In addition, this year we refreshed both the document and passage collections, which led to a nearly four-fold increase in the size of the document collection and a nearly $16$-fold increase in the size of the passage collection. Deep neural ranking models that employ large-scale pretraining continued to outperform traditional retrieval methods this year. We also found that single-stage retrieval can achieve good performance on both tasks, although it still does not perform on par with multi-stage retrieval pipelines. Finally, the increase in collection size and the general data refresh raised some questions about the completeness of the NIST judgments and the quality of the training labels that were mapped from the old collections to the new ones, which we discuss in this report.