Scaling Multi-Document Event Summarization: Evaluating Compression vs. Full-Text Approaches

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe information loss in hundred-document-scale multi-document event summarization, this work systematically compares compression-based (multi-stage pipeline) and full-text (direct long-context modeling) approaches. Using long-context Transformers (Llama-3.1, Command-R, Jamba-1.5-Mini) together with retrieval augmentation, hierarchical compression, and incremental summarization, the study finds that full-text modeling combined with retrieval performs best overall; compression methods preserve salient local information well at intermediate stages but lose global context across pipeline stages, leading to information decay. Based on these findings, the authors argue that hybrid approaches combining compression and full-text modeling are needed for coherent, comprehensive, and factually consistent summarization of very large document collections.
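One of the compression strategies named above, incremental summarization, folds documents into a rolling summary one at a time. The sketch below is not the paper's implementation; it is a minimal illustration in which a stub `summarize` callable stands in for an LLM call, and the character `budget` stands in for a context limit. The comment marks where the information decay described in the summary arises.

```python
from typing import Callable, Iterable


def incremental_summarize(
    docs: Iterable[str],
    summarize: Callable[[str, int], str],
    budget: int = 300,
) -> str:
    """Fold documents one at a time into a rolling summary."""
    running = ""
    for doc in docs:
        # Each step sees only the current summary plus one new document,
        # so details from early documents can decay over many steps --
        # the information-loss pattern the study observes.
        running = summarize(running + "\n" + doc, budget)
    return running
```

With a real system, `summarize` would be a prompt to a long-context model; here any function mapping `(text, budget) -> str` works, which makes the pipeline shape easy to test in isolation.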

📝 Abstract
Automatically summarizing large text collections is a valuable tool for document research, with applications in journalism, academic research, legal work, and many other fields. In this work, we contrast two classes of systems for large-scale multi-document summarization (MDS): compression and full-text. Compression-based methods use a multi-stage pipeline and often produce lossy summaries. Full-text methods promise a lossless summary by relying on recent advances in long-context reasoning. To understand their utility for large-scale MDS, we evaluate them on three datasets, each containing approximately one hundred documents per summary. Our experiments cover a diverse set of long-context transformers (Llama-3.1, Command-R, Jamba-1.5-Mini) and compression methods (retrieval-augmented, hierarchical, incremental). Overall, we find that full-text and retrieval methods perform best in most settings. Through further analysis of salient-information retention patterns, we show that compression-based methods hold strong promise at intermediate stages, even outperforming full-context methods. However, they suffer information loss due to their multi-stage pipelines and lack of global context. Our results highlight the need to develop hybrid approaches that combine compression and full-text methods for optimal performance on large-scale multi-document summarization.
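The hierarchical compression baseline described in the abstract can be sketched as a two-stage map-reduce pipeline: summarize groups of documents independently, then summarize the concatenated group summaries. This is an illustrative sketch, not the paper's code; the `summarize` callable and `group_size`/`budget` parameters are stand-ins for an LLM call and its context limit.

```python
from typing import Callable, List


def hierarchical_summarize(
    docs: List[str],
    summarize: Callable[[str, int], str],
    group_size: int = 10,
    budget: int = 200,
) -> str:
    """Map-reduce summarization: compress groups, then fuse the summaries."""
    # Map stage: each group of documents is summarized independently,
    # so content salient only across groups cannot be weighed here.
    group_summaries = [
        summarize(" ".join(docs[i : i + group_size]), budget)
        for i in range(0, len(docs), group_size)
    ]
    # Reduce stage: fuse the intermediate summaries into one. Global
    # context lost in the map stage cannot be recovered here -- the
    # failure mode the abstract attributes to multi-stage pipelines.
    return summarize(" ".join(group_summaries), budget)
```

The full-text alternative would instead be a single `summarize(" ".join(docs), budget)` call over the entire collection, which is exactly the trade-off the experiments compare: one lossless but context-hungry pass versus staged compression with local fidelity but global loss.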
Problem

Research questions and friction points this paper is trying to address.

Compare compression vs. full-text summarization methods
Evaluate methods on large-scale multi-document datasets
Identify the need for hybrid approaches for optimal summarization performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrasts compression vs. full-text summarization
Evaluates long-context transformers
Motivates hybrid compression/full-text summarization approaches