🤖 AI Summary
Existing retrieval methods for multi-document summarization (MDS) rely on hand-crafted queries and coarse-grained, document-level truncation to address Transformer input-length constraints, resulting in low relevance and poor generalizability. To overcome these limitations, this paper proposes an end-to-end unified retrieval framework that jointly models implicit query generation (queries are automatically derived from salient Elementary Discourse Units, or EDUs), document ranking, and fine-grained EDU-level filtering. The approach integrates retrieval and ranking within a single Transformer architecture, employs attention-driven relevance scoring, and applies adaptive EDU filtering under context-length constraints to achieve semantically precise compression. Extensive experiments on multiple MDS benchmarks demonstrate consistent ROUGE improvements over strong baselines. The framework also exhibits cross-architecture scalability, robustness to input variation, and high accuracy in dynamic query selection and document ranking.
📝 Abstract
In the field of multi-document summarization (MDS), transformer-based models have demonstrated remarkable success, yet they suffer from an input-length limitation. Current methods apply truncation after the retrieval process to fit the context length; however, they depend heavily on manually crafted queries, which are impractical to create for every document set in MDS. Additionally, these methods retrieve information at a coarse granularity, leading to the inclusion of irrelevant content. To address these issues, we propose a novel retrieval-based framework that integrates query selection, document ranking, and document shortening into a unified process. Our approach identifies the most salient elementary discourse units (EDUs) from the input documents and uses them as latent queries. These queries guide document ranking through relevance scores. Instead of traditional truncation, our approach filters out irrelevant EDUs to fit the context length, ensuring that only critical information is preserved for summarization. We evaluate our framework on multiple MDS datasets, demonstrating consistent improvements in ROUGE metrics while confirming its scalability and flexibility across diverse model architectures. We further validate its effectiveness through an in-depth analysis, highlighting its ability to dynamically select appropriate queries and accurately rank documents by their relevance scores. These results demonstrate that our framework effectively addresses context-length constraints, establishing it as a robust and reliable solution for MDS.
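The pipeline the abstract describes (select salient EDUs as latent queries, rank content by relevance to them, then keep the top EDUs under a token budget) can be sketched roughly as follows. This is an illustrative toy, not the paper's method: it swaps the Transformer's attention-driven relevance scoring for bag-of-words cosine similarity, uses a simple centrality heuristic for saliency, and all function names (`select_latent_queries`, `filter_edus`) are hypothetical.

```python
import math
from collections import Counter

def vec(text):
    # Bag-of-words vector; a stand-in for a learned encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_latent_queries(edus, k=2):
    # Pick the k most "salient" EDUs: here, the ones with the highest
    # average similarity to all other EDUs (a centrality heuristic,
    # standing in for the paper's saliency model).
    scores = [
        (sum(cosine(vec(e), vec(o)) for o in edus if o is not e)
         / max(len(edus) - 1, 1), e)
        for e in edus
    ]
    return [e for _, e in sorted(scores, reverse=True)[:k]]

def filter_edus(docs, budget_tokens, k=2):
    # Rank every EDU by its best similarity to a latent query, then
    # keep the highest-scoring EDUs until the token budget is spent --
    # EDU-level filtering instead of document-level truncation.
    all_edus = [e for d in docs for e in d]
    queries = select_latent_queries(all_edus, k)
    ranked = sorted(
        all_edus,
        key=lambda e: max(cosine(vec(e), vec(q)) for q in queries),
        reverse=True,
    )
    kept, used = [], 0
    for edu in ranked:
        n = len(edu.split())
        if used + n <= budget_tokens:
            kept.append(edu)
            used += n
    return kept
```

The key contrast with truncation is in `filter_edus`: low-relevance EDUs anywhere in the input are dropped, rather than cutting each document at a fixed length, so the surviving context is filled with the most query-relevant units.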