Variations in Relevance Judgments and the Shelf Life of Test Collections

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
The rise of neural retrieval has changed the character of modern test collections: documents are shorter, relevance is judged on finer-grained scales, and information needs are stated more ambiguously. This raises two concerns: whether assessor disagreement compromises the validity of system comparisons, and whether heavy re-use renders test collections “outdated.” Method: We re-annotated the relevance judgments of the TREC 2019 Deep Learning track at scale and ran Cranfield-style evaluations with rigorous statistical testing. Contribution/Results: We provide empirical evidence of a “shelf life” for test collections: the effectiveness of several state-of-the-art models dropped by 5–12% under the new annotations, and some top-performing models have already reached the effectiveness of human rankers, suggesting overfitting to the original interpretation of relevance and a consequent breakdown of the evaluation. Crucially, inter-annotator disagreement did not significantly perturb relative system rankings. These findings underscore the need for ongoing maintenance of test collections and provide foundational evidence on the robustness of neural IR evaluation.
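
As a rough illustration of the ranking-stability check described above (a minimal sketch, not the authors' code; all system names and scores below are hypothetical), one could score the same runs against the original and the re-annotated qrels and correlate the two induced system orderings:

```python
from scipy.stats import kendalltau

# Hypothetical mean nDCG@10 per system under the original and re-annotated qrels.
scores_original = {"bm25": 0.48, "dense_bi": 0.66, "monoBERT": 0.69, "duoT5": 0.72}
scores_reannotated = {"bm25": 0.47, "dense_bi": 0.62, "monoBERT": 0.63, "duoT5": 0.64}

systems = sorted(scores_original)  # fixed system order so the two score lists are paired
tau, p_value = kendalltau(
    [scores_original[s] for s in systems],
    [scores_reannotated[s] for s in systems],
)
print(f"Kendall's tau between system rankings: {tau:.3f} (p={p_value:.3f})")
# A tau close to 1 means the relative ordering of systems survives the new judgments,
# even though every system's absolute effectiveness dropped.
```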

📝 Abstract
The fundamental property of Cranfield-style evaluations, that system rankings are stable even when assessors disagree on individual relevance decisions, was validated on traditional test collections. However, the paradigm shift towards neural retrieval models affected the characteristics of modern test collections, e.g., documents are short, judged with four grades of relevance, and information needs have no descriptions or narratives. Under these changes, it is unclear whether assessor disagreement remains negligible for system comparisons. We investigate this aspect under the additional condition that the few modern test collections are heavily re-used. Given more possible query interpretations due to less formalized information needs, an “expiration date” for test collections might be needed if top-effectiveness requires overfitting to a single interpretation of relevance. We run a reproducibility study and re-annotate the relevance judgments of the 2019 TREC Deep Learning track. We can reproduce prior work in the neural retrieval setting, showing that assessor disagreement does not affect system rankings. However, we observe that some models substantially degrade with our new relevance judgments, and some have already reached the effectiveness of humans as rankers, providing evidence that test collections can expire.
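
The judgments in question use four relevance grades, so graded measures such as nDCG apply. A minimal, self-contained sketch of nDCG over four-grade judgments (hypothetical documents and grades, not data from the paper):

```python
import math

def dcg(grades):
    """Discounted cumulative gain with exponential gain for graded judgments."""
    return sum((2 ** g - 1) / math.log2(rank + 2) for rank, g in enumerate(grades))

def ndcg_at_k(ranked_doc_ids, qrels, k=10):
    gains = [qrels.get(d, 0) for d in ranked_doc_ids[:k]]
    ideal = sorted(qrels.values(), reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if dcg(ideal) > 0 else 0.0

# Hypothetical four-grade qrels (0 = not relevant ... 3 = perfectly relevant)
qrels = {"d1": 3, "d2": 2, "d3": 0, "d4": 1}
ranking = ["d2", "d1", "d5", "d4", "d3"]  # one system's ranked output
print(f"nDCG@10 = {ndcg_at_k(ranking, qrels):.3f}")
```
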
Problem

Research questions and friction points this paper is trying to address.

Whether assessor disagreement affects system rankings when comparing neural retrieval models.
Whether heavily re-used modern test collections need an expiration date once top effectiveness requires overfitting to a single interpretation of relevance.
Whether the classical finding that system rankings are stable under assessor disagreement reproduces on the TREC 2019 Deep Learning track.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reproducibility study of ranking stability under assessor disagreement in the neural retrieval setting
Re-annotation of the TREC 2019 Deep Learning track relevance judgments
Investigation of test collection expiration (“shelf life”) effects