Evaluation Should Not Ignore Variation: On the Impact of Reference Set Choice on Summarization Metrics

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a critical limitation of reference-based automatic evaluation metrics (e.g., ROUGE): their stability and reliability are highly sensitive to the choice of reference summaries. Current practice, which ignores linguistic diversity among references, leads to inconsistent model rankings and weak correlation with human judgments. The authors conduct the first systematic quantification of how reference set variation affects metric behavior and propose treating reference set diversity as an explicit evaluation criterion. Through empirical analysis across multiple datasets (SummEval, GUMSum, DUC2004), statistical sensitivity tests, cross-genre human evaluations, and joint assessment of correlation and stability, they show that n-gram-based metrics are highly sensitive to reference composition, often producing rank reversals. Incorporating reference diversity, by contrast, significantly improves evaluation consistency and alignment with human judgments, supporting a more robust, human-aligned benchmark framework for summarization evaluation in the LLM era.

📝 Abstract
Human language production exhibits remarkable richness and variation, reflecting diverse communication styles and intents. However, this variation is often overlooked in summarization evaluation. While having multiple reference summaries is known to improve correlation with human judgments, the impact of using different reference sets on reference-based metrics has not been systematically investigated. This work examines the sensitivity of widely used reference-based metrics in relation to the choice of reference sets, analyzing three diverse multi-reference summarization datasets: SummEval, GUMSum, and DUC2004. We demonstrate that many popular metrics exhibit significant instability. This instability is particularly concerning for n-gram-based metrics like ROUGE, where model rankings vary depending on the reference sets, undermining the reliability of model comparisons. We also collect human judgments on LLM outputs for genre-diverse data and examine their correlation with metrics to supplement existing findings beyond newswire summaries, finding weak-to-no correlation. Taken together, we recommend incorporating reference set variation into summarization evaluation to enhance consistency alongside correlation with human judgments, especially when evaluating LLMs.
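The instability described above is easy to reproduce in miniature. The sketch below (not from the paper; a simplified unigram-overlap ROUGE-1 with toy sentences, no stemming or tokenization beyond whitespace) shows how the ranking of two system outputs can flip depending on which equally valid reference set is used:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 (a simplified ROUGE-1; no stemming or stopword handling)."""
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection of unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def multi_ref_score(candidate: str, references: list[str]) -> float:
    """A common multi-reference convention: take the max over references."""
    return max(rouge1_f1(candidate, r) for r in references)

# Two toy system outputs and two equally valid reference sets.
sys_a = "the cat sat on the mat"
sys_b = "a feline rested on the rug"
refs_1 = ["the cat sat on the mat quietly"]    # lexically close to system A
refs_2 = ["a feline was resting on the rug"]   # lexically close to system B

# Same systems, same source text: the ranking flips with the reference set.
a_wins_under_1 = multi_ref_score(sys_a, refs_1) > multi_ref_score(sys_b, refs_1)
a_wins_under_2 = multi_ref_score(sys_a, refs_2) > multi_ref_score(sys_b, refs_2)
print(a_wins_under_1, a_wins_under_2)  # True False -> a rank reversal
```

This is the rank-reversal phenomenon the abstract refers to: because n-gram metrics reward surface overlap, swapping in a paraphrased but equally correct reference set can invert which system "wins".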
Problem

Research questions and friction points this paper is trying to address.

Impact of reference set choice on summarization metrics
Instability of popular metrics across different reference sets
Weak correlation between metrics and human judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes impact of reference sets on metrics
Examines metric instability across diverse datasets
Recommends incorporating reference variation for consistency
Silvia Casola
LMU
Natural Language Processing, Machine Learning
Yang Janet Liu
MaiNLP, Center for Information and Language Processing, LMU Munich, Germany; Munich Center for Machine Learning (MCML), Munich, Germany
Siyao Peng
MaiNLP, Center for Information and Language Processing, LMU Munich, Germany; Munich Center for Machine Learning (MCML), Munich, Germany
Oliver Kraus
MaiNLP, Center for Information and Language Processing, LMU Munich, Germany
Albert Gatt
Professor of Natural Language Generation, Utrecht University
Computational Linguistics, Natural Language Generation, Vision and Language, Language Production
Barbara Plank
Professor, LMU Munich, Visiting Prof ITU Copenhagen
Natural Language Processing, Computational Linguistics, Machine Learning, Transfer Learning