🤖 AI Summary
Existing LLM-based relevance judgment methods rely heavily on prompt engineering and lack standardized, reproducible annotation protocols. Method: We reintroduce Task-aware Rubric-based Evaluation (TRUE)—a reproducible framework originally developed for usefulness evaluation in search sessions—and extend it to relevance judgment generation. TRUE integrates task-aware scoring rubrics, iterative data sampling, and multi-factor reasoning spanning intent, coverage, specificity, accuracy, and usefulness. Unlike conventional single-shot prompting, TRUE employs a structured reasoning workflow to improve label consistency and reliability. Contribution/Results: Experiments on TREC DL 2019/2020 and LLMJudge demonstrate that TRUE-generated labels achieve strong agreement with human judgments (Spearman’s ρ > 0.85) and rank among the top performers on the system-ranking LLM leaderboards. By mitigating prompt sensitivity and procedural fragmentation, TRUE establishes a robust, transparent, and scalable foundation for relevance evaluation.
📝 Abstract
LLM-based relevance judgment generation has become a crucial approach to advancing evaluation methodologies in Information Retrieval (IR). It has progressed significantly, often showing high correlation with human judgments, as reflected in the LLMJudge leaderboards (Rahmani et al., 2025). However, existing methods for relevance judgment rely heavily on sensitive prompting strategies and lack standardized workflows for generating reliable labels. To fill this gap, we reintroduce our method, *Task-aware Rubric-based Evaluation* (TRUE), for relevance judgment generation. Originally developed for usefulness evaluation in search sessions, TRUE is extended here to relevance judgment because of its demonstrated effectiveness and reproducible workflow. The framework leverages iterative data sampling and reasoning to evaluate relevance across multiple factors, including intent, coverage, specificity, accuracy, and usefulness. In this paper, we evaluate TRUE on the TREC DL 2019, TREC DL 2020, and LLMJudge datasets, and our results show that TRUE achieves strong performance on the system-ranking LLM leaderboards. The primary focus of this work is to provide a reproducible framework for LLM-based relevance judgments, and we further analyze the effectiveness of TRUE across multiple dimensions.
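To make the multi-factor idea concrete, here is a minimal, hypothetical sketch of how per-dimension rubric scores might be aggregated into a single relevance label. The dimension names follow the abstract (intent, coverage, specificity, accuracy, usefulness); the 0–3 scale, the uniform averaging, and the function names are illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch: aggregate per-dimension rubric scores into one label.
# In TRUE, per-dimension scores would come from an LLM prompted with a
# task-aware rubric for each query-passage pair; here we mock that step
# and simply average the dimensions (an assumed, not published, rule).

RUBRIC_DIMENSIONS = ["intent", "coverage", "specificity", "accuracy", "usefulness"]

def aggregate_relevance(scores: dict) -> int:
    """Map per-dimension scores (0-3 each) to a TREC-style 0-3 label."""
    missing = [d for d in RUBRIC_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing rubric dimensions: {missing}")
    mean = sum(scores[d] for d in RUBRIC_DIMENSIONS) / len(RUBRIC_DIMENSIONS)
    return round(mean)

# Example: a passage that matches the intent and is accurate but shallow.
label = aggregate_relevance(
    {"intent": 3, "coverage": 1, "specificity": 1, "accuracy": 3, "usefulness": 2}
)
print(label)  # 2
```

Averaging is only one possible aggregation; a weighted scheme or an LLM-produced final judgment over the dimension scores would fit the same interface.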