DHP Benchmark: Are LLMs Good NLG Evaluators?

📅 2024-08-25
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing NLG evaluation relies heavily on human annotations and shallow automatic metrics, and does not systematically assess how well large language models (LLMs) can discern quality differences when acting as automated evaluators. Method: We propose the Discernment of Hierarchical Perturbation (DHP) benchmarking framework, which quantifies LLM judgment performance across four tasks (summarization, story completion, question answering, and translation) via controlled hierarchical text perturbations, multi-task data re-establishment, and nonparametric hypothesis testing (e.g., the Wilcoxon signed-rank test). Contribution/Results: DHP yields quantitative discernment scores that enable reproducible, annotation-free evaluation of LLM-as-a-judge capability. Benchmarking five major LLM families reveals pronounced task dependency and systematic discriminative blind spots. We publicly release six re-established evaluation datasets to advance trustworthy, standardized automatic evaluation research.
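The statistical step in the summary above can be sketched in a few lines. This is a minimal illustration, not the paper's exact pipeline: the scores are made-up stand-ins for an LLM judge's ratings of original versus perturbed texts, and the test simply checks whether the judge rates perturbed texts lower.

```python
from scipy.stats import wilcoxon

# Hypothetical 1-5 quality ratings from an LLM judge (illustrative values).
scores_original = [4.5, 4.0, 4.8, 3.9, 4.2, 4.6, 4.1, 4.4]
scores_perturbed = [3.8, 3.5, 4.0, 3.3, 3.9, 4.15, 3.55, 3.75]

# One-sided Wilcoxon signed-rank test: does the judge systematically
# score the perturbed texts lower than the originals?
stat, p_value = wilcoxon(scores_original, scores_perturbed, alternative="greater")
print(f"W+={stat}, p={p_value:.4f}")
```

A significant p-value here would indicate the judge discerns the perturbation; DHP aggregates such paired comparisons across hierarchical perturbation levels into per-task discernment scores.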

📝 Abstract
Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks; this is often referred to as the "LLM-as-a-judge" paradigm. However, the capabilities of LLMs in evaluating NLG quality remain underexplored. Current studies depend on human assessments and simple metrics that fail to capture the discernment of LLMs across diverse NLG tasks. To address this gap, we propose the Discernment of Hierarchical Perturbation (DHP) benchmarking framework, which provides quantitative discernment scores for LLMs. This framework leverages hierarchically perturbed text data and statistical tests to systematically measure the NLG evaluation capabilities of LLMs. We re-established six evaluation datasets for this benchmark, covering four NLG tasks: Summarization, Story Completion, Question Answering, and Translation. Our comprehensive benchmarking of five major LLM families provides critical insight into their strengths and limitations as NLG evaluators. Our dataset is available at https://huggingface.co/datasets/YCWANGVINCE/DHP_Benchmark.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs as NLG evaluators
Developing DHP benchmarking framework
Measuring LLM discernment in NLG tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discernment of Hierarchical Perturbation (DHP) framework
Quantitative discernment scores
Systematic NLG evaluation measurement
Yicheng Wang
Texas A&M University
Jiayi Yuan
Rice University
Machine Learning, Large Language Models
Yu-Neng Chuang
Rice University
Large Language Model, Trustworthy AI, Explainable Artificial Intelligence, Recommender System
Zhuoer Wang
Texas A&M University
Natural Language Processing, Artificial Intelligence
Yingchi Liu
Axon Enterprise, Inc.
Mark Cusick
Axon Enterprise, Inc.
Param Kulkarni
Axon Enterprise, Inc.
Zhengping Ji
Axon Enterprise, Inc.
Yasser Ibrahim
Axon Enterprise, Inc.
Xia Hu
Google DeepMind
Deep Learning, Machine Learning, Multimodal