LLM-Assisted Relevance Assessments: When Should We Ask LLMs for Help?

📅 2024-11-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manual relevance annotation for information retrieval test collections is costly and limited in scale, leading to unstable evaluation; conversely, LLM-generated labels (e.g., from GPT-4) suffer from systematic bias. Method: We propose LARA, a human-AI collaborative relevance annotation framework featuring a budget-aware active calibration mechanism: it dynamically selects the documents with maximal information gain for human annotation and, using this minimal human feedback, models and calibrates the bias in the LLM's predicted relevance probabilities (e.g., via Platt scaling or binning). Results: Evaluated on four standard benchmarks (TREC-7 Ad Hoc, TREC-8 Ad Hoc, TREC Robust 2004, and TREC-COVID), LARA consistently outperforms fully manual, fully LLM-based, and existing hybrid approaches across diverse annotation budgets, achieving up to 32% improvement in evaluation stability while enabling scalable, low-cost, high-fidelity, and low-bias test collection construction.
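
The calibration step mentioned in the summary (Platt scaling of the LLM's raw relevance probabilities against a small set of human labels) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `calibrate_llm_probs`, the use of scikit-learn, and the toy inputs are assumptions for the example.

```python
# Minimal sketch of Platt-style calibration of LLM relevance probabilities.
# Assumes a small set of human-labeled documents and applies the learned
# sigmoid to the documents that were never manually assessed.
import numpy as np
from sklearn.linear_model import LogisticRegression


def calibrate_llm_probs(llm_probs_labeled, human_labels, llm_probs_unlabeled):
    """Fit a sigmoid (Platt scaling) mapping raw LLM relevance probabilities
    to calibrated probabilities, then apply it to non-assessed documents."""
    eps = 1e-6
    # Work in log-odds space so the fitted sigmoid is a proper Platt fit.
    p = np.clip(np.asarray(llm_probs_labeled, dtype=float), eps, 1 - eps)
    scores = np.log(p / (1 - p)).reshape(-1, 1)

    platt = LogisticRegression()
    platt.fit(scores, np.asarray(human_labels))

    q = np.clip(np.asarray(llm_probs_unlabeled, dtype=float), eps, 1 - eps)
    unlabeled_scores = np.log(q / (1 - q)).reshape(-1, 1)
    # Calibrated probability of relevance for the remaining documents.
    return platt.predict_proba(unlabeled_scores)[:, 1]


# Toy usage: three human-labeled documents calibrate the remaining two.
calibrated = calibrate_llm_probs(
    llm_probs_labeled=[0.92, 0.40, 0.75],
    human_labels=[1, 0, 1],
    llm_probs_unlabeled=[0.60, 0.85],
)
print(calibrated)
```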

📝 Abstract
Test collections are information retrieval tools that allow researchers to quickly and easily evaluate ranking algorithms. While test collections have become an integral part of IR research, the process of data creation involves significant effort in manual annotation, which often makes it very expensive and time-consuming. Thus, test collections can become too small when the budget is limited, which may lead to unstable evaluations. As a cheaper alternative, recent studies have proposed using large language models (LLMs) to completely replace human assessors. However, while LLM judgments correlate with human judgments to some degree, their predictions are not perfect and often show bias, so a complete replacement with LLMs is argued to be too risky and not fully reliable. In this paper, we therefore propose LLM-Assisted Relevance Assessments (LARA), an effective method to balance manual annotations with LLM annotations, which helps to build a rich and reliable test collection even under a low budget. We use the LLM's predicted relevance probabilities to select the most profitable documents to manually annotate under a budget constraint. Guided by theoretical reasoning, LARA actively learns to calibrate the LLM's predicted relevance probabilities, effectively directing the human annotation process. Then, using the calibration model learned from the limited manual annotations, LARA debiases the LLM predictions to annotate the remaining non-assessed data. Empirical evaluations on the TREC-7 Ad Hoc, TREC-8 Ad Hoc, TREC Robust 2004, and TREC-COVID datasets show that LARA outperforms alternative solutions under almost any budget constraint.
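
The abstract outlines a three-part procedure: select the most profitable documents for manual annotation under a budget, learn a calibration model from those annotations, and debias the LLM's predictions for the remaining documents. The sketch below illustrates that loop under stated assumptions: the uncertainty-based selection rule (probability closest to 0.5) is a stand-in for the paper's theoretically derived criterion, and all names (`lara_style_loop`, `annotate_fn`, `calibrate_fn`) are hypothetical.

```python
# Hedged sketch of a budget-constrained annotation loop in the spirit of the
# abstract: send selected documents to human assessors, then calibrate and
# debias the LLM labels for everything that was never assessed.
import numpy as np


def lara_style_loop(llm_probs, annotate_fn, calibrate_fn, budget, batch_size=10):
    """llm_probs: raw LLM relevance probabilities for all documents.
    annotate_fn(indices) -> list of human 0/1 labels for those documents.
    calibrate_fn(probs, labels) -> callable mapping raw probs to calibrated probs.
    """
    llm_probs = np.asarray(llm_probs, dtype=float)
    unlabeled = set(range(len(llm_probs)))
    labeled_idx, labels = [], []

    while len(labeled_idx) < budget and unlabeled:
        # Assumed selection rule: documents whose LLM probability is most
        # uncertain (closest to 0.5) are routed to human annotators first.
        candidates = sorted(unlabeled, key=lambda i: abs(llm_probs[i] - 0.5))
        batch = candidates[: min(batch_size, budget - len(labeled_idx))]
        labels.extend(annotate_fn(batch))
        labeled_idx.extend(batch)
        unlabeled.difference_update(batch)

    # Learn the calibration model from the manual annotations, then debias the
    # LLM probabilities of the documents that were never assessed.
    calibrator = calibrate_fn(llm_probs[labeled_idx], np.asarray(labels))
    remaining = sorted(unlabeled)
    debiased = calibrator(llm_probs[remaining])
    return dict(zip(labeled_idx, labels)), dict(zip(remaining, debiased))


# Toy usage: a simulated assessor and an identity calibrator stand in for real
# human annotators and a fitted calibration model such as Platt scaling.
probs = np.array([0.9, 0.52, 0.1, 0.48, 0.7, 0.3])
manual, auto = lara_style_loop(
    probs,
    annotate_fn=lambda idx: [int(probs[i] > 0.5) for i in idx],
    calibrate_fn=lambda p, y: (lambda q: q),  # identity; replace with a real fit
    budget=2,
    batch_size=2,
)
print(manual, auto)
```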
Problem

Research questions and friction points this paper is trying to address.

Information Retrieval
Budget-Constrained Evaluation
Human-Machine Collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-Assisted Relevance Assessment
Budget-Constrained Test Set Construction
Bias Mitigation in LLM Predictions
Rikiya Takehi
Waseda University, Tokyo, Japan
E. Voorhees
National Institute of Standards and Technology, Emeritus, Maryland, United States
Tetsuya Sakai
Waseda University
information retrieval · interaction · natural language processing · social good