Summarization Metrics for Spanish and Basque: Do Automatic Scores and LLM-Judges Correlate with Humans?

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing automatic summarization evaluation metrics and LLM-as-a-Judge models rely heavily on English, with little empirical validation of their effectiveness in other languages. Method: We introduce BASSE, the first bilingual (Spanish–Basque) human-annotated summarization evaluation dataset, comprising 2,040 summaries rated on five fine-grained Likert dimensions, and also release the first large-scale Basque news summarization corpus (22,525 articles). We systematically assess traditional metrics (e.g., ROUGE, BERTScore) and LLM-based judges (GPT-4, Claude, Llama-3) via Spearman and Pearson correlation with human judgments. Results: Closed-source LLM judges (ρ = 0.68) substantially outperform task-specific automatic metrics (ρ = 0.52) and open-source LLM judges (ρ < 0.3); all metrics correlate more weakly on Basque than on Spanish. This work fills a critical gap in multilingual summarization evaluation, establishing an empirical benchmark and methodological framework for low-resource language assessment.
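The evaluation protocol described above, scoring each metric by how well it correlates with human Likert ratings, can be sketched as follows. This is a minimal illustration, not the paper's code: the ratings and metric scores below are hypothetical.

```python
# Sketch of a metric-meta-evaluation step: correlate an automatic metric's
# scores with human Likert judgments. All data here are hypothetical,
# not taken from BASSE.
from scipy.stats import spearmanr, pearsonr

human_ratings = [5, 3, 4, 2, 5, 1, 4, 3]  # hypothetical 5-point Likert scores
metric_scores = [0.91, 0.55, 0.72, 0.40, 0.88, 0.21, 0.70, 0.58]  # hypothetical metric outputs

rho, rho_p = spearmanr(human_ratings, metric_scores)
r, r_p = pearsonr(human_ratings, metric_scores)
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
print(f"Pearson  r   = {r:.2f} (p = {r_p:.3f})")
```

In this setup, a higher rank correlation (Spearman's ρ) means the metric orders summaries more like human annotators do; the paper reports such correlations per criterion and per language.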

📝 Abstract
Studies on evaluation metrics and LLM-as-a-Judge models for automatic text summarization have largely focused on English, limiting our understanding of their effectiveness in other languages. Through our new dataset BASSE (BAsque and Spanish Summarization Evaluation), we address this gap by collecting human judgments on 2,040 abstractive summaries in Basque and Spanish, generated either manually or by five LLMs with four different prompts. For each summary, annotators evaluated five criteria on a 5-point Likert scale: coherence, consistency, fluency, relevance, and 5W1H. We use these data to reevaluate traditional automatic metrics for summarization, as well as several LLM-as-a-Judge models that show strong performance on this task in English. Our results show that proprietary judge LLMs currently have the highest correlation with human judgments, followed by criteria-specific automatic metrics, while open-source judge LLMs perform poorly. We release BASSE and our code publicly, along with the first large-scale Basque summarization dataset containing 22,525 news articles with their subheads.
Problem

Research questions and friction points this paper is trying to address.

Evaluating summarization metrics for Spanish and Basque languages
Assessing correlation between automatic scores, LLM-judges, and human judgments
Addressing lack of large-scale datasets for Basque summarization
Innovation

Methods, ideas, or system contributions that make the work stand out.

New dataset BASSE for Basque and Spanish summarization
Evaluates automatic metrics and LLM-as-a-Judge models
Proprietary judge LLMs correlate best with humans