Entities as Retrieval Signals: A Systematic Study of Coverage, Supervision, and Evaluation in Entity-Oriented Ranking

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study attributes the poor performance of entity-oriented retrieval in open-world evaluation to the evaluation paradigm rather than to model architecture. It introduces a distinction between Conceptual Entity Relevance (CER) and Observable Entity Relevance (OER), showing that current supervision strategies lack discriminative power because they ignore the linking context, and that entity coverage and discriminative capability stand in inherent tension. Combining six neural rerankers, 437 unsupervised entity-grounding configurations, and a BM25 baseline, the authors establish a joint evaluation framework on TREC Robust04 that reports both coverage and effectiveness. Across all 443 systems, none significantly improves MAP under open-world conditions; the best configuration matches the official top-performing system, yet entity signals cover only 19.7% of relevant documents. This demonstrates that architectural limitations are not the bottleneck and underscores the need for a new evaluation paradigm.
📝 Abstract
Entity-oriented retrieval assumes that relevant documents exhibit query-relevant entities, yet evaluations report conflicting results. We show this inconsistency stems not from model failure, but from evaluation. On TREC Robust04, we evaluate six neural rerankers and 437 unsupervised configurations against BM25. Across 443 systems, none improves MAP by more than 0.05 under open-world evaluation over the full candidate set, despite strong gains under entity-restricted settings. The best configuration matches the official Robust04 best system and outperforms most neural rerankers, indicating that the architecture is not the limiting factor. Instead, the bottleneck is the entity channel: even under idealized selection, entity signals cover only 19.7% of relevant documents, and no method achieves both high coverage and discrimination. We explain this via a distinction between Conceptual Entity Relevance (CER) -- semantic relatedness -- and Observable Entity Relevance (OER) -- corpus-grounded discriminativeness under a given linker. All supervision strategies operate at the CER level and ignore the linking environment, leading to signals that are semantically valid but not discriminative. Improving supervision therefore does not recover open-world performance: stronger signals reduce coverage without improving effectiveness. Conditional and open-world evaluation answer different questions -- exploiting entity evidence versus improving retrieval under realistic linking -- yet the two are often conflated. Progress requires datasets with entity-level discriminativeness and evaluation that reports both coverage and effectiveness. Until then, conditional gains do not imply open-world effectiveness, and open-world failures do not invalidate entity-based models.
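The joint evaluation the abstract calls for, reporting both entity coverage of the relevant set and ranking effectiveness (MAP), can be sketched as follows. This is a minimal illustration with hypothetical toy data; the function names, qrels, and entity hits are placeholders, not the paper's actual pipeline or linker output.

```python
# Sketch of a joint coverage-plus-effectiveness report, assuming:
#   qrels:        query id -> set of relevant doc ids
#   entity_hits:  query id -> docs containing a linked query entity (hypothetical)
#   rankings:     query id -> ranked list of doc ids from some retrieval system

def entity_coverage(qrels, entity_hits):
    """Fraction of relevant documents reachable through the entity channel."""
    covered = total = 0
    for qid, relevant in qrels.items():
        covered += len(relevant & entity_hits.get(qid, set()))
        total += len(relevant)
    return covered / total if total else 0.0

def average_precision(ranking, relevant):
    """AP of one ranked list against a set of relevant doc ids."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(rankings, qrels):
    return sum(average_precision(rankings[q], qrels[q]) for q in qrels) / len(qrels)

# Toy data (illustrative only): the entity channel reaches few relevant docs,
# so coverage is low even if the ranking itself is reasonable.
qrels = {"q1": {"d1", "d2", "d3"}, "q2": {"d4"}}
entity_hits = {"q1": {"d1"}, "q2": set()}
rankings = {"q1": ["d1", "d5", "d2"], "q2": ["d6", "d4"]}

print(round(entity_coverage(qrels, entity_hits), 3))        # 0.25
print(round(mean_average_precision(rankings, qrels), 3))    # 0.528
```

Reporting both numbers side by side is the point: a system can post a competitive MAP while its entity signals cover only a small fraction of the relevant documents, which is exactly the open-world bottleneck the paper describes.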
Problem

Research questions and friction points this paper is trying to address.

entity-oriented retrieval
coverage
discriminativeness
open-world evaluation
evaluation methodology
Innovation

Methods, ideas, or system contributions that make the work stand out.

entity-oriented retrieval
open-world evaluation
entity coverage
conceptual vs observable relevance
retrieval supervision