🤖 AI Summary
Traditional information retrieval (IR) evaluation relies heavily on costly, labor-intensive manual relevance judgments, limiting scalability. While leveraging large language models (LLMs) for automated relevance scoring offers efficiency gains, end-to-end LLM-based approaches suffer from unreliability and poor interpretability. To address these limitations, this work proposes Multi-Criteria, a framework that decomposes relevance into four interpretable dimensions, exactness, coverage, topicality, and contextual fit, and employs structured prompting to elicit calibrated, dimension-specific LLM scores, which are then aggregated. This decomposition enhances judgment robustness and ranking consistency. Experiments on the TREC Deep Learning 2019/2020 and LLMJudge benchmarks demonstrate that Multi-Criteria significantly outperforms end-to-end LLM scoring baselines in leaderboard evaluation metrics, achieving higher correlation with human judgments. The framework enables efficient, scalable, and trustworthy automated IR evaluation without sacrificing interpretability or reliability.
📝 Abstract
Relevance judgments are crucial for evaluating information retrieval systems, but traditional human-annotated labels are time-consuming and expensive. As a result, many researchers turn to automatic alternatives to accelerate method development. Among these, Large Language Models (LLMs) provide a scalable solution by generating relevance labels directly through prompting. However, prompting an LLM for a relevance label without constraints often yields not only incorrect predictions but also outputs that are difficult for humans to interpret. We propose the Multi-Criteria framework for LLM-based relevance judgments, decomposing the notion of relevance into multiple criteria (such as exactness, coverage, topicality, and contextual fit) to improve the robustness and interpretability of retrieval evaluations compared to direct grading methods. We validate this approach on three datasets: the TREC Deep Learning tracks from 2019 and 2020, as well as LLMJudge (based on TREC DL 2023). Our results demonstrate that Multi-Criteria judgments improve system-ranking (leaderboard) performance. Moreover, we highlight the strengths and limitations of this approach relative to direct grading, offering insights that can guide the development of future automatic evaluation frameworks in information retrieval.
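The decompose-then-aggregate idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the four criterion names come from the abstract, but the prompt wording, the 0–2 per-criterion scale, the sum-then-threshold mapping onto a 0–3 relevance grade, and the `llm_score` callable are all hypothetical assumptions.

```python
# Minimal sketch of multi-criteria relevance judgment.
# Criterion names follow the abstract; the prompt text, the 0-2
# per-criterion scale, and the aggregation thresholds below are
# illustrative assumptions, not the paper's specification.

CRITERIA = {
    "exactness": "How precisely does the passage answer the query?",
    "coverage": "How much of the passage is devoted to the query?",
    "topicality": "Is the passage on the same subject as the query?",
    "contextual_fit": "Does the passage supply relevant context or background?",
}

def judge(query: str, passage: str, llm_score) -> dict:
    """Score each criterion with one LLM call, then aggregate.

    `llm_score` is a caller-supplied callable (a hypothetical LLM
    wrapper) that maps a prompt string to an integer in {0, 1, 2}.
    """
    scores = {}
    for name, question in CRITERIA.items():
        prompt = (
            f"Query: {query}\nPassage: {passage}\n{question}\n"
            "Answer with a single integer: 0 (no), 1 (partially), or 2 (yes)."
        )
        scores[name] = llm_score(prompt)

    total = sum(scores.values())  # 0..8 over the four criteria
    # Collapse the criterion sum onto a 0-3 relevance grade
    # (thresholds chosen for illustration only).
    if total <= 1:
        grade = 0
    elif total <= 3:
        grade = 1
    elif total <= 6:
        grade = 2
    else:
        grade = 3
    return {"scores": scores, "total": total, "grade": grade}
```

Keeping the per-criterion scores alongside the final grade is what makes the judgment interpretable: a human can see *why* a passage was graded relevant (e.g., high topicality but low exactness) rather than receiving a single opaque label.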