LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 69
Influential: 3
🤖 AI Summary
This study investigates whether large language models (LLMs) can reliably replace human annotators when evaluating NLP models. Method: the authors introduce JUDGE-BENCH, a large-scale, multi-task evaluation benchmark with high-quality human annotations, and systematically assess the effectiveness and consistency of 11 state-of-the-art LLMs as automatic evaluators across 20 NLP datasets. The methodology combines analysis of human annotation quality, statistical significance testing, and correlation metrics between model and human judgments (Kendall's τ and Spearman's ρ). Contribution/Results: LLM evaluation quality is highly contingent on the property being evaluated, the expertise level of the human annotators, and whether the text is human- or model-generated; while LLMs approximate human judgments on some tasks, they are not universally reliable, so human annotations remain the gold standard against which LLM evaluators must be validated. JUDGE-BENCH, including all human annotations, model outputs, and evaluation scripts, is publicly released to support standardized, reproducible research on LLM-based evaluation.
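To make the correlation analysis mentioned above concrete, here is a minimal sketch (not the paper's released evaluation scripts) of how LLM-assigned scores could be correlated with human annotations for one dataset and one evaluated property; the scores shown are made-up placeholder values.

```python
# Minimal sketch: agreement between LLM-assigned scores and human
# annotations on the same items, using Kendall's tau and Spearman's rho.
from scipy.stats import kendalltau, spearmanr

# Hypothetical ratings on a 1-5 scale for the same eight items.
human_scores = [5, 3, 4, 2, 4, 1, 5, 3]
llm_scores   = [4, 3, 5, 2, 3, 2, 5, 4]

tau, tau_p = kendalltau(human_scores, llm_scores)
rho, rho_p = spearmanr(human_scores, llm_scores)

print(f"Kendall's tau = {tau:.3f} (p = {tau_p:.3f})")
print(f"Spearman's rho = {rho:.3f} (p = {rho_p:.3f})")
```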

📝 Abstract
There is an increasing trend towards evaluating NLP models with LLMs instead of human judgments, raising questions about the validity of these evaluations, as well as their reproducibility in the case of proprietary models. We provide JUDGE-BENCH, an extensible collection of 20 NLP datasets with human annotations covering a broad range of evaluated properties and types of data, and comprehensively evaluate 11 current LLMs, covering both open-weight and proprietary models, for their ability to replicate the annotations. Our evaluations show substantial variance across models and datasets. Models are reliable evaluators on some tasks, but overall display substantial variability depending on the property being evaluated, the expertise level of the human judges, and whether the language is human or model-generated. We conclude that LLMs should be carefully validated against human judgments before being used as evaluators.
Problem

Research questions and friction points this paper is trying to address.

Assessing the validity of replacing human judges with LLMs in NLP evaluation
Examining the reproducibility of evaluations that rely on proprietary rather than open-weight LLMs
Measuring how LLM evaluator performance varies across diverse NLP tasks, evaluated properties, and data types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic study of LLMs as replacements for human judges in NLP evaluation
JUDGE-BENCH: an extensible collection of 20 NLP datasets with human annotations
Evaluation of 11 open-weight and proprietary LLMs on their ability to replicate human annotations (a minimal sketch of this setup follows below)
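The following is a minimal sketch of the LLM-as-judge setup referenced in the last bullet, assuming a hypothetical call_llm() wrapper around whichever judge model's API is being tested; the prompt wording, score scale, and helper names are illustrative assumptions, not the prompts released with JUDGE-BENCH.

```python
# Minimal sketch of scoring one annotated item with an LLM judge.
# `call_llm` is a hypothetical wrapper around the judge model's API;
# it is not part of the JUDGE-BENCH release.
import re

def build_judge_prompt(instance: str, prop: str) -> str:
    """Ask the model to rate one output on a single property, 1-5."""
    return (
        f"Rate the following text for {prop} on a scale from 1 (very poor) "
        f"to 5 (excellent). Reply with a single number.\n\nText:\n{instance}"
    )

def parse_score(reply: str):
    """Extract the first 1-5 rating from the model's reply, if any."""
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else None

def judge(instance: str, prop: str, call_llm):
    """Score one instance with the judge model."""
    return parse_score(call_llm(build_judge_prompt(instance, prop)))

# The resulting model scores would then be correlated with the human
# annotations for the same items, e.g. with the correlation sketch above.
```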
👥 Authors
Anna Bavaresco (University of Amsterdam)
Raffaella Bernardi (Free University of Bozen-Bolzano): Language Grounding to Vision, Dialogues, Reasoning, Syntax-Semantics Interface
Leonardo Bertolazzi (University of Trento)
Desmond Elliott (Associate Professor, University of Copenhagen): Natural Language Processing, Vision-Language, Tokenization-free Language Models
Raquel Fernández (University of Amsterdam)
Albert Gatt (Professor of Natural Language Generation, Utrecht University): Computational Linguistics, Natural Language Generation, Vision and Language, Language Production
E. Ghaleb (University of Amsterdam)
Mario Giulianelli (Associate Professor, UCL): Computational Linguistics, Language Modelling, AI Evaluation
Michael Hanna (University of Amsterdam)
Alexander Koller (Professor of Computational Linguistics, Saarland University, Saarland Informatics Campus): Computational Linguistics, Artificial Intelligence
André F. T. Martins (Universidade de Lisboa & Unbabel)
Philipp Mondorf (Ph.D. Student, Meta FAIR | LMU Munich): Reasoning in LLMs and humans
Vera Neplenbroek (University of Amsterdam)
Sandro Pezzelle (Assistant Professor at ILLC, University of Amsterdam): Natural Language Processing, Multimodal Machine Learning, AI, Cognitive Science
Barbara Plank (Professor, LMU Munich; Visiting Professor, ITU Copenhagen): Natural Language Processing, Computational Linguistics, Machine Learning, Transfer Learning
David Schlangen (Professor, "Foundations of Computational Linguistics", University of Potsdam): Computational Linguistics, Artificial Intelligence, Conversational Agents, Dialogue Systems
Alessandro Suglia (Assistant Professor, Heriot-Watt University, Edinburgh Centre for Robotics, National Robotarium): Multimodal Generative AI, Embodied AI, Conversational AI
Aditya K Surikuchi (University of Amsterdam)
Ece Takmaz (Utrecht University): Natural Language Processing, Language and Vision, Perception and Action, Artificial Intelligence
A. Testoni (University of Amsterdam)