When LLMs Disagree: Diagnosing Relevance Filtering Bias and Retrieval Divergence in SDG Search

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates inter-model disagreement among large language models (LLMs) when performing relevance labeling of academic abstracts related to the Sustainable Development Goals (SDGs) in low-resource, human-annotation-free settings, and its impact on downstream retrieval. Focusing on two open-source LLM families—LLaMA and Qwen—we propose a novel evaluation paradigm centered on “inter-model classification disagreement” as the analytical unit, integrating lexical pattern mining, ranking consistency analysis, and AUC-based separability assessment. We find that disagreement is not random but exhibits systematic semantic and structural patterns (AUC > 0.74), persisting even under identical ranking logic and yielding significantly divergent top-ranked retrieval results. This study is the first to characterize the structural variability introduced by LLM-based labeling, providing both theoretical foundations and practical guidelines for trustworthy retrieval and collaborative model-assisted annotation.

📝 Abstract
Large language models (LLMs) are increasingly used to assign document relevance labels in information retrieval pipelines, especially in domains lacking human-labeled data. However, different models often disagree on borderline cases, raising concerns about how such disagreement affects downstream retrieval. This study examines labeling disagreement between two open-weight LLMs, LLaMA and Qwen, on a corpus of scholarly abstracts related to Sustainable Development Goals (SDGs) 1, 3, and 7. We isolate disagreement subsets and examine their lexical properties, rank-order behavior, and classification predictability. Our results show that model disagreement is systematic, not random: disagreement cases exhibit consistent lexical patterns, produce divergent top-ranked outputs under shared scoring functions, and are distinguishable with AUCs above 0.74 using simple classifiers. These findings suggest that LLM-based filtering introduces structured variability in document retrieval, even under controlled prompting and shared ranking logic. We propose using classification disagreement as an object of analysis in retrieval evaluation, particularly in policy-relevant or thematic search tasks.
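The abstract's core procedure can be sketched in a few lines: take each model's binary relevance labels, isolate the subset where they differ, and test whether a simple lexical feature separates disagreement from agreement cases via ROC AUC. The sketch below is illustrative only, with synthetic abstracts, made-up labels, and a toy "hedging-word count" feature standing in for the paper's lexical pattern mining; it is not the authors' actual pipeline.

```python
# Illustrative sketch (synthetic data): isolate the LLaMA/Qwen disagreement
# subset and test its separability with a trivial lexical score.

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U rank statistic (ties get half credit)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-document relevance labels from the two models (1 = relevant).
llama_labels = [1, 1, 0, 1, 0, 0, 1, 0]
qwen_labels  = [1, 0, 0, 1, 1, 0, 1, 1]

# The disagreement subset is simply where the labels differ.
disagree = [int(a != b) for a, b in zip(llama_labels, qwen_labels)]

# Toy lexical feature: hedging-term count per abstract, a stand-in for
# the paper's lexical pattern mining.
HEDGES = {"may", "potential", "suggests", "partially"}
abstracts = [
    "solar energy reduces poverty",
    "this policy may have potential effects",
    "unrelated materials science study",
    "health outcomes improve with access",
    "evidence partially suggests indirect links",
    "pure mathematics of knots",
    "clean water and health in rural areas",
    "findings may indicate potential relevance",
]
scores = [sum(w in HEDGES for w in a.split()) for a in abstracts]

auc = roc_auc(scores, disagree)
print(f"disagreement rate: {sum(disagree) / len(disagree):.2f}")
print(f"AUC separating disagreement cases: {auc:.2f}")
```

On this toy data the hedged abstracts line up perfectly with disagreement, so the AUC is trivially high; the paper's point is the weaker but still systematic version of this effect (AUC > 0.74) on real SDG abstracts.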
Problem

Research questions and friction points this paper is trying to address.

Analyze LLM disagreement in document relevance labeling
Examine systematic bias in SDG-related scholarly abstracts
Propose disagreement as a retrieval evaluation metric
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treat inter-model classification disagreement as the unit of analysis
Combine lexical pattern mining, ranking consistency, and AUC-based separability
Show disagreement is systematic (AUC > 0.74) and shifts top-ranked results
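The retrieval-divergence claim above can be sketched concretely: filter the corpus with each model's relevance labels, rank the survivors with one shared scoring function, and compare the top-k sets. Everything below is hypothetical (doc IDs, scores, and filter decisions are invented for illustration), not the paper's data.

```python
# Minimal sketch (synthetic data): two LLM filters, one shared ranking
# function, and the resulting top-k divergence measured with Jaccard overlap.

def topk(pool, score, k=3):
    """Rank a filtered pool with the shared scoring function, keep top k."""
    return sorted(pool, key=score, reverse=True)[:k]

# Shared relevance scores, identical for both pipelines.
docs = {f"doc{i}": s for i, s in enumerate([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])}
shared_score = docs.get  # one ranking logic for both filters

# Hypothetical keep/discard decisions from the two labelers.
llama_keep = {"doc0", "doc1", "doc3", "doc4"}
qwen_keep  = {"doc0", "doc2", "doc3", "doc5"}

top_l = set(topk(llama_keep, shared_score))
top_q = set(topk(qwen_keep, shared_score))
jaccard = len(top_l & top_q) / len(top_l | top_q)
print(f"top-3 overlap (Jaccard): {jaccard:.2f}")
```

Even with identical ranking logic, the two filters surface partially different top-3 sets here, which is the mechanism the paper flags: labeling disagreement propagates into divergent retrieval output.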