🤖 AI Summary
To address the limitation of conventional relevance labels in capturing user intent and task objectives in information retrieval, this paper proposes an LLM-based method for automatically annotating document usefulness. Methodologically, it integrates multi-granularity features (task context, query, document content, and user behavior signals) and systematically investigates LLMs' ability to discriminate between "usefulness" and "relevance." It further incorporates search-session context to improve judgment accuracy in short-session scenarios. Key contributions include: (1) empirical validation that session context boosts annotation accuracy for short tasks; (2) identification of user behavior signals as critical discriminators for usefulness assessment; and (3) an ablation analysis identifying the features most critical to label quality, demonstrating that LLMs can consistently generate usefulness labels of moderate quality. The approach advances usefulness-aware IR evaluation by grounding annotations in both behavioral evidence and contextual semantics.
📝 Abstract
In the information retrieval (IR) domain, evaluation plays a crucial role in optimizing search experiences and supporting diverse user intents. In the recent LLM era, research has sought to automate document relevance labeling, as these labels have traditionally been assigned by crowdsourced workers, a process that is both time-consuming and costly. This study focuses on LLM-generated usefulness labels, a crucial evaluation signal that accounts for the user's search intent and task objectives, an aspect where relevance falls short. Our experiments use task-level, query-level, and document-level features along with user search-behavior signals, which are essential in defining the usefulness of a document. Our research finds that (i) pre-trained LLMs can generate moderate-quality usefulness labels by understanding the search task session as a whole, and (ii) pre-trained LLMs make better judgments in short search sessions when provided with session context. Additionally, we investigate whether LLMs can capture the divergence between relevance and usefulness, and conduct an ablation study to identify the features most critical for accurate usefulness-label generation. In conclusion, this work explores LLM-generated usefulness labels by evaluating critical metrics and optimizing for practicality in real-world settings.
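To make the setup concrete, the sketch below shows one plausible way to assemble the task-level, query-level, document-level, and behavioral features the abstract mentions into a single usefulness-judgment prompt for an LLM. All function and field names here are illustrative assumptions, not the paper's actual implementation, and the prompt wording is only a minimal stand-in.

```python
# Hypothetical sketch of a usefulness-labeling prompt builder.
# The feature names (task, query, document, behavior) mirror the
# feature levels described in the abstract; everything else is assumed.

def build_usefulness_prompt(task: str, query: str, document: str,
                            behavior: dict) -> str:
    """Combine task-, query-, and document-level features with user
    search-behavior signals into one labeling prompt for an LLM."""
    return (
        "You are judging how USEFUL a document was for a searcher's task, "
        "not merely how topically relevant it is.\n"
        f"Search task: {task}\n"
        f"Query issued: {query}\n"
        f"Document content: {document}\n"
        f"Behavior signals: clicked={behavior['clicked']}, "
        f"dwell_time_s={behavior['dwell_time_s']}\n"
        "Answer with a usefulness label from 0 (not useful) to 3 (very useful)."
    )

# Example usage with made-up session data; the resulting string would be
# sent to a pre-trained LLM for the actual usefulness judgment.
prompt = build_usefulness_prompt(
    task="Plan a week-long trip to Kyoto",
    query="kyoto temples opening hours",
    document="Official guide listing temple visiting hours...",
    behavior={"clicked": True, "dwell_time_s": 85},
)
print(prompt)
```

Keeping the behavioral signals (click, dwell time) in the prompt is what distinguishes a usefulness judgment from a plain relevance judgment, which would use only the query and document text.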