🤖 AI Summary
Subjective NLP tasks—such as hate speech detection—pose challenges for reliable, scalable evaluation due to their reliance on costly human annotation. This work investigates whether large language models (LLMs) can serve as trustworthy proxy evaluators: not replacing human annotators, but reliably reproducing human-derived model performance rankings.
Method: We apply the cross-Rater Reliability (xRR) framework to LLM evaluation for the first time, combining zero-shot and few-shot LLM labeling, Kendall’s τ rank correlation analysis, and comparative evaluation of multiple models across several hate speech datasets.
Contribution/Results: Empirical results show strong rank-order agreement (τ > 0.85) between model rankings derived from LLM-generated labels and those derived from human annotations. This supports LLMs’ viability as efficient, scalable proxy evaluators for subjective NLP tasks and offers a methodological path toward objective evaluation under subjectivity constraints.
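As a minimal sketch of the ranking-agreement check described above (the scores below are hypothetical, not the paper's data), Kendall's τ can be computed pairwise over two lists of per-model scores, one from human labels and one from LLM labels:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: concordant-minus-discordant pairs over all pairs (assumes no ties)."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1  # pair ordered the same way under both label sources
        elif s < 0:
            discordant += 1  # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical macro-F1 scores for five classifiers under each label source.
human_f1 = [0.72, 0.68, 0.81, 0.59, 0.75]
llm_f1   = [0.70, 0.66, 0.83, 0.61, 0.74]

tau = kendall_tau(human_f1, llm_f1)
print(tau)  # 1.0 here: the LLM labels preserve the human-derived ranking exactly
```

τ ranges from −1 (fully reversed ranking) to 1 (identical ranking); values above 0.85, as reported, mean nearly every pair of models is ordered the same way under both label sources.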
📝 Abstract
Hate speech spreads widely online, harming individuals and communities; automatic detection is therefore essential for large-scale moderation, yet it remains difficult. Part of the challenge lies in subjectivity: what one person flags as hate speech, another may see as benign. Traditional annotation agreement metrics, such as Cohen's $\kappa$, oversimplify this disagreement, treating it as error rather than meaningful diversity. Meanwhile, Large Language Models (LLMs) promise scalable annotation, but prior studies demonstrate that they cannot fully replace human judgement, especially in subjective tasks. In this work, we reexamine LLM reliability using a subjectivity-aware framework, cross-Rater Reliability (xRR), revealing that even under this fairer lens, LLMs still diverge from humans. Yet this limitation opens an opportunity: we find that LLM-generated annotations can reliably reflect performance trends across classification models, correlating with human evaluations. We test this by examining whether LLM-generated annotations preserve the relative ordering of model performance derived from human evaluation (i.e., whether models ranked as more reliable by human annotators keep the same order when evaluated with LLM-generated labels). Our results show that, although LLMs differ from humans at the instance level, they reproduce similar ranking and classification patterns, suggesting their potential as proxy evaluators. While not a substitute for human annotators, they may serve as a scalable proxy for evaluation in subjective NLP tasks.
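To make concrete why Cohen's $\kappa$ treats disagreement as error, here is a minimal sketch with hypothetical annotator labels (not data from this work): two raters who genuinely perceive two posts differently simply lower the score, with no distinction between noise and legitimate subjective divergence.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    # Chance agreement: probability both raters pick the same label independently.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling ten posts (1 = hate speech, 0 = benign).
ann1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
ann2 = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

print(cohens_kappa(ann1, ann2))  # 0.6: the two subjective disagreements are scored as error
```

The metric collapses all disagreement into one penalty; subjectivity-aware frameworks such as xRR instead ask whether one pool of raters is as consistent with another pool as that pool is with itself.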