Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing summarization evaluation frameworks suffer from narrow domain coverage, monolingual bias (predominantly English), high annotation costs, and poor inter-annotator agreement. To address these limitations, the authors propose MSumBench, the first bilingual (Chinese–English), cross-domain, multi-dimensional summarization evaluation benchmark, spanning multiple specialized domains and eight mainstream summarization models. They introduce a domain-adaptive evaluation framework with knowledge-enhanced, fine-grained metrics, and pioneer a multi-agent debate-based annotation paradigm that significantly improves annotation consistency (Krippendorff's α increased by 23.6%). They also systematically characterize biases in LLM-based self-evaluation and conduct a meta-evaluation of the "LLM-as-a-judge" paradigm. All datasets and code are publicly released.
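Agreement gains like the α improvement above are conventionally measured with Krippendorff's α, which compares observed to expected disagreement across annotators. As a point of reference (not the paper's code), a minimal sketch for nominal labels:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of lists; each inner list holds the labels all
    annotators assigned to one item (missing annotations omitted)."""
    o = Counter()    # coincidence matrix over ordered label pairs
    n_c = Counter()  # per-label marginal counts
    n = 0            # total pairable labels
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # units with a single annotation are unpairable
        for c, k in permutations(labels, 2):
            o[(c, k)] += 1.0 / (m - 1)
        n_c.update(labels)
        n += m
    # Observed disagreement: coincidence mass off the diagonal.
    d_o = sum(v for (c, k), v in o.items() if c != k)
    # Expected disagreement from the label marginals.
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e if d_e else 1.0
```

Perfect agreement yields α = 1, while chance-level agreement yields α near 0, which is why a 23.6% gain in α is a meaningful improvement in annotation reliability.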

📝 Abstract
Evaluation frameworks for text summarization have evolved in terms of both domain coverage and metrics. However, existing benchmarks still lack domain-specific assessment criteria, remain predominantly English-centric, and face challenges with human annotation due to the complexity of reasoning. To address these, we introduce MSumBench, which provides a multi-dimensional, multi-domain evaluation of summarization in English and Chinese. It also incorporates specialized assessment criteria for each domain and leverages a multi-agent debate system to enhance annotation quality. By evaluating eight modern summarization models, we discover distinct performance patterns across domains and languages. We further examine large language models as summary evaluators, analyzing the correlation between their evaluation and summarization capabilities, and uncovering systematic bias in their assessment of self-generated summaries. Our benchmark dataset is publicly available at https://github.com/DISL-Lab/MSumBench.
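The abstract's finding of systematic bias in self-generated-summary assessment can be quantified as a self-preference score: how much higher a judge model rates its own summaries than those of other models. A hypothetical sketch of that measurement (the function name and input layout are illustrative, not from the paper):

```python
def self_preference_bias(judge_scores, judge_model):
    """judge_scores: mapping from summarizer model name to the mean
    score this judge assigned its summaries. Returns the gap between
    the score the judge gives its own model and the mean score it
    gives all other models; positive values indicate self-preference."""
    own = judge_scores[judge_model]
    others = [s for m, s in judge_scores.items() if m != judge_model]
    return own - sum(others) / len(others)
```

A bias near zero suggests the judge scores its own outputs no differently from its peers'; a consistently positive value across domains would indicate the systematic self-evaluation bias the paper reports.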
Problem

Research questions and friction points this paper is trying to address.

Lack of domain-specific criteria in summarization benchmarks
English-centric bias in existing evaluation frameworks
Challenges in human annotation due to reasoning complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MSumBench for multi-dimensional evaluation
Uses multi-agent debate for better annotations
Evaluates eight modern summarization models across domains in English and Chinese
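The multi-agent debate idea can be sketched as a loop in which several annotator agents score a summary, see the panel's previous scores, and revise until they converge or a round budget runs out. This is a minimal illustration, not the paper's implementation; the agent interface and the majority-vote fallback are assumptions:

```python
from collections import Counter

def debate_annotate(agents, summary, rounds=3):
    """agents: callables (summary, transcript) -> score. Each round the
    panel sees the prior rounds' scores via `transcript` and may revise.
    Returns the consensus score, or the majority vote if no consensus."""
    transcript = []
    scores = [agent(summary, transcript) for agent in agents]
    for _ in range(rounds):
        if len(set(scores)) == 1:  # consensus reached
            break
        transcript.append(list(scores))
        scores = [agent(summary, transcript) for agent in agents]
    return Counter(scores).most_common(1)[0][0]

def make_stub_agent(initial):
    """Stub standing in for an LLM annotator: opens with its own score,
    then defers to the previous round's majority."""
    def agent(summary, transcript):
        if not transcript:
            return initial
        return Counter(transcript[-1]).most_common(1)[0][0]
    return agent
```

In practice each agent would be an LLM prompted with the domain-specific criteria and the other agents' rationales; the stubs here only demonstrate the control flow.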