🤖 AI Summary
This work investigates the prompt sensitivity and robustness of large language model (LLM)–based relevance assessment for information retrieval. The authors construct a large-scale prompt set of 90 prompts authored by human domain experts and LLMs, employ three LLMs (Llama-3, GPT-4, Claude-3) as judges, and systematically evaluate labeling consistency across binary, graded, and pairwise judgment tasks on the TREC Deep Learning Track (2020/2021) datasets, also comparing against the standard UMBRELA prompt. Within a prompt-aware evaluation framework, they find that prompt variation induces substantial fluctuations in agreement between LLM and official human labels, with Cohen's κ varying by up to ±0.28, and that human-crafted prompts outperform LLM-generated ones by an average of 0.11 κ. All prompts and corresponding annotations are publicly released, supporting a reproducible, empirically grounded paradigm for rigorous LLM-based relevance evaluation.
📝 Abstract
Large Language Models (LLMs) are increasingly used to automate relevance judgments for information retrieval (IR) tasks, often demonstrating agreement with human labels that approaches inter-human agreement. To assess the robustness and reliability of LLM-based relevance judgments, we systematically investigate the impact of prompt sensitivity on the task. We collected prompts for relevance assessment from 15 human experts and 15 LLMs across three tasks (binary, graded, and pairwise), yielding 90 prompts in total. After filtering out unusable prompts from three humans and three LLMs, we employed the remaining 72 prompts with three different LLMs as judges to label document/query pairs from two TREC Deep Learning Track datasets (2020 and 2021). We compare LLM-generated labels with official TREC human labels using Cohen's κ and pairwise agreement measures. In addition to investigating the impact of prompt variation on agreement with human labels, we compare human- and LLM-generated prompts and analyze differences among the LLM judges. We also compare human- and LLM-generated prompts with the standard UMBRELA prompt, used for relevance assessment by Bing and in the TREC 2024 Retrieval Augmented Generation (RAG) Track. To support future research on LLM-based evaluation, we release all data and prompts at https://github.com/Narabzad/prompt-sensitivity-relevance-judgements/.
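The agreement metric at the center of the study, Cohen's κ, corrects raw label agreement for the agreement expected by chance. A minimal sketch of the computation is below; the binary labels are illustrative placeholders, not data from the paper.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled independently
    # according to their own marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary relevance labels: human assessor vs. an LLM judge.
human = [1, 0, 1, 1, 0, 1, 0, 0]
llm   = [1, 0, 1, 0, 0, 1, 1, 0]
print(cohens_kappa(human, llm))  # → 0.5 (6/8 observed vs. 0.5 chance agreement)
```

For graded judgments with ordered levels, a weighted variant of κ (e.g. quadratic weights) is commonly used instead, since it penalizes near-miss disagreements less than distant ones.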