Do LLM-judges Align with Human Relevance in Cranfield-style Recommender Evaluation?

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Offline evaluation of recommender systems suffers from instability due to exposure bias, popularity bias, and missing-not-at-random (MNAR) data, while Cranfield-style manual relevance assessment is costly and scales poorly. This paper presents the first systematic validation of large language models (LLMs) as automated relevance judges in Cranfield-style evaluation. We construct a test set based on ML-32M-ext, enriching LLM inputs with item metadata and user interaction history to improve contextual understanding. Experiments across the movie and industrial podcast domains demonstrate strong agreement between LLM-based judgments and human annotations, achieving Kendall's tau = 0.87 in ranking consistency and significantly outperforming conventional baselines. Moreover, LLM-judges successfully support model selection in real-world deployment. Our work establishes an efficient, reliable, and scalable LLM-driven paradigm for recommender system evaluation.
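The headline agreement metric can be illustrated in miniature: Kendall's tau measures how consistently two judges order the same set of recommender systems. A minimal sketch using `scipy.stats.kendalltau` (the leaderboard below is hypothetical, not the paper's data):

```python
from scipy.stats import kendalltau

# Hypothetical leaderboard: five recommender systems ranked
# by human-judged relevance and by LLM-judged relevance (1 = best).
human_rank = [1, 2, 3, 4, 5]
llm_rank = [1, 2, 4, 3, 5]  # LLM-judge swaps the 3rd and 4th systems

tau, p_value = kendalltau(human_rank, llm_rank)
print(f"Kendall's tau = {tau:.2f}")  # 0.80: one discordant pair out of ten
```

With one discordant pair out of ten, tau = (9 - 1) / 10 = 0.80; a reported tau of 0.87 over many systems indicates the LLM-judge rarely inverts the human ordering.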

📝 Abstract
Evaluating recommender systems remains a long-standing challenge, as offline methods based on historical user interactions and train-test splits often yield unstable and inconsistent results due to exposure bias, popularity bias, sampled evaluations, and missing-not-at-random patterns. In contrast, textual document retrieval benefits from robust, standardized evaluation via Cranfield-style test collections, which combine pooled relevance judgments with controlled setups. While recent work shows that adapting this methodology to recommender systems is feasible, constructing such collections remains costly due to the need for manual relevance judgments, thus limiting scalability. This paper investigates whether Large Language Models (LLMs) can serve as reliable automatic judges to address these scalability challenges. Using the ML-32M-ext Cranfield-style movie recommendation collection, we first examine the limitations of existing evaluation methodologies. We then assess both the label-level alignment and the system-ranking agreement between the LLM-judge and human-provided relevance labels. We find that incorporating richer item metadata and longer user histories improves alignment, and that the LLM-judge yields high agreement with human-based rankings (Kendall's tau = 0.87). Finally, an industrial case study in the podcast recommendation domain demonstrates the practical value of the LLM-judge for model selection. Overall, our results show that the LLM-judge is a viable and scalable approach for evaluating recommender systems.
Problem

Research questions and friction points this paper is trying to address.

Offline evaluation of recommender systems yields unstable, inconsistent results
Manual relevance judgments limit Cranfield-style evaluation scalability
LLMs may replace human judges for scalable evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs serve as automatic judges for evaluation
Richer item metadata and longer user histories improve alignment with human judgments
LLM-judge achieves high agreement with human rankings
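The metadata-enrichment idea above can be sketched as a small prompt builder that packs item metadata and recent user history into a graded-relevance question. This is a hypothetical illustration, assuming the paper's general setup; the function, field names, and prompt wording are not the authors' actual implementation:

```python
def build_judge_prompt(item, history, max_history=20):
    """Format an LLM-judge relevance prompt from item metadata and a
    user's recent interaction history (all field names hypothetical)."""
    seen = "\n".join(f"- {title}" for title in history[-max_history:])
    return (
        "You are a relevance judge for a movie recommender system.\n"
        f"Candidate item: {item['title']} ({item['year']})\n"
        f"Genres: {', '.join(item['genres'])}\n"
        f"User's recent viewing history:\n{seen}\n"
        "Answer with a relevance grade from 0 (not relevant) "
        "to 3 (highly relevant)."
    )

# Example call with made-up metadata and history
prompt = build_judge_prompt(
    {"title": "Heat", "year": 1995, "genres": ["Crime", "Thriller"]},
    ["Collateral", "The Insider", "Thief"],
)
print(prompt)
```

The `max_history` cap reflects the paper's finding that longer user histories help alignment, while keeping the prompt within a model's context window.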