🤖 AI Summary
This work addresses the tendency of large language models (LLMs) to overestimate relevance and to judge inconsistently when assessing document relevance from queries alone. To mitigate this, the study proposes using LLMs to automatically generate formal, human-aligned information need statements, comprising both descriptions and narratives, that structurally guide relevance judgments. This is the first systematic investigation demonstrating that such formalized topics significantly enhance the reliability of LLM-based evaluation. The approach also offers an automatic synthesis strategy for scenarios where human-authored topics are unavailable. Experimental results show that the proposed method substantially curbs the tendency to judge too many documents as relevant and improves both inter-LLM agreement and alignment between LLM and human assessors, thereby strengthening the robustness of retrieval evaluation.
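For concreteness, here is a minimal sketch of how a bare query could be expanded into a TREC-style formalized topic with a description and a narrative. The `call_llm` helper and the prompt wording are illustrative assumptions, not the prompts used in the paper:

```python
# Hypothetical sketch: synthesizing a TREC-style formalized topic from a bare query.
# `call_llm` is a placeholder for whatever chat/completion client is actually used;
# the prompt wording is illustrative, not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completions API)."""
    raise NotImplementedError

def formalize_topic(query: str) -> dict:
    """Expand a query into a TREC-style topic with description and narrative."""
    prompt = (
        "You are formalizing an information need for retrieval evaluation.\n"
        f"Query (title): {query}\n\n"
        "Write two fields in the style of TREC topics:\n"
        "Description: one sentence stating what the searcher wants to find.\n"
        "Narrative: a short paragraph specifying what makes a document "
        "relevant and what does NOT count as relevant.\n"
    )
    raw = call_llm(prompt)
    # Naive parsing of the two fields; a real pipeline would validate the output.
    description, _, narrative = raw.partition("Narrative:")
    return {
        "title": query,
        "description": description.replace("Description:", "").strip(),
        "narrative": narrative.strip(),
    }
```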
📝 Abstract
In Cranfield-style retrieval evaluations, too few or too many relevant documents, or low inter-assessor agreement on relevance, can reduce the reliability of observations. In evaluations with human assessors, information needs are therefore often formalized as retrieval topics to avoid an excessive number of relevant documents while maintaining good agreement. However, emerging evaluation setups that use Large Language Models (LLMs) as relevance assessors often use only queries, potentially decreasing reliability. To study whether LLM relevance assessors benefit from formalized information needs, we use LLMs to synthetically formalize information needs into topics that follow the established structure of previous human relevance assessments (i.e., descriptions and narratives). We compare assessors that use synthetically formalized topics against the default query-only LLM assessor on Robust04 and the 2019/2020 editions of TREC Deep Learning. We find that assessors without formalization judge many more documents as relevant and have lower agreement, leading to reduced reliability in retrieval evaluations. Furthermore, we show that the formalized topics improve agreement between human and LLM relevance judgments, even when the topics are not highly similar to their human counterparts. Our findings indicate that LLM relevance assessors should use formalized information needs, as is standard for human assessment, and should synthetically formalize topics when no human formalization exists, in order to improve evaluation reliability.
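The comparison described above boils down to two assessor configurations. The sketch below, again with a hypothetical `call_llm` helper and illustrative prompt wording rather than the paper's actual setup, contrasts the query-only judge with one that also sees the formalized description and narrative:

```python
# Hypothetical sketch of the two assessor configurations compared in the paper:
# a query-only judge vs. a judge that also receives the (synthetically) formalized
# description and narrative. Prompts and parsing are illustrative assumptions.
from typing import Callable

def judge_query_only(call_llm: Callable[[str], str], query: str, document: str) -> bool:
    """Default setup: the LLM sees only the query and the document."""
    prompt = (
        f"Query: {query}\n\nDocument:\n{document}\n\n"
        "Is this document relevant to the query? Answer 'relevant' or 'not relevant'."
    )
    # Naive answer parsing for the sketch.
    return "not relevant" not in call_llm(prompt).lower()

def judge_with_topic(call_llm: Callable[[str], str], topic: dict, document: str) -> bool:
    """Formalized setup: the prompt also carries the description and narrative,
    which spell out what does and does not count as relevant."""
    prompt = (
        f"Title: {topic['title']}\n"
        f"Description: {topic['description']}\n"
        f"Narrative: {topic['narrative']}\n\n"
        f"Document:\n{document}\n\n"
        "Following the narrative's relevance criteria, answer 'relevant' or 'not relevant'."
    )
    return "not relevant" not in call_llm(prompt).lower()
```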