🤖 AI Summary
This work investigates the failure mechanisms of multi-document summarization (MDS) models under zero-shot cross-domain transfer (news → scientific → conversational domains). We systematically evaluate four dominant paradigms—end-to-end pretraining, chunked summarization, extractive-then-generative, and GPT-style reasoning—using both human judgment and automated metrics. For the first time, we unify and quantify domain-transfer failure across three dimensions: reference similarity, summary quality, and factual consistency. Our analysis reveals that failure stems primarily from semantic structural mismatch between training paradigms and target-domain discourse properties—not merely from distributional shift. Moreover, standard automatic metrics (e.g., ROUGE, BERTScore) exhibit significant miscalibration in cross-domain settings. Based on these findings, we propose principled metric calibration strategies and introduce the first cross-domain benchmark specifically designed for robustness evaluation of MDS models. This benchmark provides empirical grounding and methodological guidance for developing generalizable, domain-agnostic summarization systems.
📝 Abstract
Abstractive multi-document summarization (MDS) is the task of automatically summarizing information in multiple documents, from news articles to conversations with multiple speakers. The training approaches for current MDS models fall into four groups: end-to-end with special pre-training ("direct"), chunk-then-summarize, extract-then-summarize, and inference with GPT-style models. In this work, we evaluate MDS models across training approaches, domains, and dimensions (reference similarity, quality, and factuality) to analyze how and why models trained on one domain can fail to summarize documents from another (News, Science, and Conversation) in the zero-shot domain-transfer setting. We define domain-transfer "failure" as a decrease in factuality, higher deviation from the target, and a general decrease in summary quality. In addition to exploring domain transfer for MDS models, we examine potential issues with applying popular summarization metrics out-of-the-box.
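The abstract's concern about applying metrics out-of-the-box can be illustrated with a minimal sketch of ROUGE-1 F1 (unigram overlap). Because the metric rewards surface word overlap rather than meaning, a summary that copies the reference's wording but flips a key fact can outscore a faithful paraphrase. The sentences below are invented toy examples, not from the paper's data:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Toy example: the hallucinated summary changes "most" to "all"
# (a factual error) yet reuses the reference's surface wording.
ref = "the study found the drug reduced symptoms in most patients"
faithful = "most patients had fewer symptoms after taking the drug"
hallucinated = "the study found the drug reduced symptoms in all patients"

print(round(rouge1_f1(ref, faithful), 2))      # ≈ 0.53
print(round(rouge1_f1(ref, hallucinated), 2))  # → 0.9
```

The non-factual candidate scores substantially higher, which is one concrete way lexical-overlap metrics can be miscalibrated with respect to factual consistency, especially when transferring across domains with different phrasing conventions.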