🤖 AI Summary
This study investigates whether large language models (LLMs) can reliably substitute for human annotators and raters in labeling and scoring tasks across linguistics, medicine, psychology, and the social sciences. To this end, the authors propose the Alternative Annotator Test (alt-test), a lightweight, interpretable statistical procedure that requires only a small set of human annotations to justify replacing human annotators with an LLM. Key contributions are: (1) a versatile, cross-model-comparable measure for quantifying LLM judge quality; (2) a multi-domain benchmark of ten datasets spanning language and vision-language tasks; and (3) an open-source alt-test toolkit. Experiments with six LLMs, including GPT-4o, and prompting strategies such as chain-of-thought and self-refinement show that, in certain scenarios, closed-source LLMs match or exceed human annotators in inter-rater agreement and accuracy, particularly in NLP and medical tasks, and that prompt design strongly influences judge quality.
📝 Abstract
The"LLM-as-a-judge"paradigm employs Large Language Models (LLMs) as annotators and evaluators in tasks traditionally performed by humans. LLM annotations are widely used, not only in NLP research but also in fields like medicine, psychology, and social science. Despite their role in shaping study results and insights, there is no standard or rigorous procedure to determine whether LLMs can replace human annotators. In this paper, we propose a novel statistical procedure -- the Alternative Annotator Test (alt-test) -- that requires only a modest subset of annotated examples to justify using LLM annotations. Additionally, we introduce a versatile and interpretable measure for comparing LLM judges. To demonstrate our procedure, we curated a diverse collection of ten datasets, consisting of language and vision-language tasks, and conducted experiments with six LLMs and four prompting techniques. Our results show that LLMs can sometimes replace humans with closed-source LLMs (such as GPT-4o), outperforming open-source LLMs, and that prompting techniques yield judges of varying quality. We hope this study encourages more rigorous and reliable practices.