A Unified Retrieval Framework with Document Ranking and EDU Filtering for Multi-document Summarization

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing retrieval methods for multi-document summarization (MDS) rely on hand-crafted queries and coarse-grained, document-level truncation to address Transformer input-length constraints, resulting in low relevance and poor generalizability. To overcome these limitations, this paper proposes the first end-to-end unified retrieval framework that jointly models implicit query generation—automatically derived from salient Elementary Discourse Units (EDUs)—document ranking, and fine-grained EDU-level filtering. Our approach integrates retrieval and ranking within a single Transformer architecture, employs attention-driven relevance scoring, and applies adaptive EDU filtering under contextual constraints to achieve semantically precise compression. Extensive experiments on multiple MDS benchmarks demonstrate significant ROUGE improvements over strong baselines. The framework exhibits strong cross-architecture scalability, robustness to input variation, and high accuracy in dynamic query selection and ranking.

📝 Abstract
In the field of multi-document summarization (MDS), transformer-based models have demonstrated remarkable success, yet they suffer from an input-length limitation. Current methods apply truncation after the retrieval process to fit the context length; however, they depend heavily on manually crafted queries, which are impractical to create for each document set in MDS. Additionally, these methods retrieve information at a coarse granularity, leading to the inclusion of irrelevant content. To address these issues, we propose a novel retrieval-based framework that integrates query selection, document ranking, and document shortening into a unified process. Our approach identifies the most salient elementary discourse units (EDUs) from input documents and utilizes them as latent queries. These queries guide the document ranking by calculating relevance scores. Instead of traditional truncation, our approach filters out irrelevant EDUs to fit the context length, ensuring that only critical information is preserved for summarization. We evaluate our framework on multiple MDS datasets, demonstrating consistent improvements in ROUGE metrics while confirming its scalability and flexibility across diverse model architectures. Additionally, we validate its effectiveness through an in-depth analysis, emphasizing its ability to dynamically select appropriate queries and accurately rank documents based on their relevance scores. These results demonstrate that our framework effectively addresses context-length constraints, establishing it as a robust and reliable solution for MDS.
Problem

Research questions and friction points this paper is trying to address.

Overcoming input length limitation in transformer-based MDS models
Reducing dependency on manually crafted queries for document retrieval
Improving relevance by filtering coarse-grained irrelevant content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates query selection and document ranking
Filters irrelevant EDUs to fit context length
Uses latent queries from salient EDUs
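The pipeline above can be sketched end to end: pick salient EDUs as latent queries, rank documents against them, then keep only the most query-relevant EDUs under a token budget. This is a minimal illustration, not the paper's method: it uses bag-of-words cosine similarity in place of the paper's learned, attention-driven relevance scores, and the centroid-based salience heuristic and all function names here are hypothetical.

```python
import math
from collections import Counter

def vectorize(text):
    # Term-frequency bag-of-words vector (stand-in for a learned encoder).
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_latent_queries(edus, k=2):
    # Salience proxy: EDUs closest to the centroid of all EDUs are
    # treated as latent queries (the paper learns salience instead).
    centroid = vectorize(" ".join(edus))
    ranked = sorted(edus, key=lambda e: cosine(vectorize(e), centroid),
                    reverse=True)
    return ranked[:k]

def rank_documents(docs, queries):
    # A document's relevance is its best similarity to any latent query.
    def relevance(doc):
        return max(cosine(vectorize(doc), vectorize(q)) for q in queries)
    return sorted(docs, key=relevance, reverse=True)

def filter_edus(edus, queries, budget_tokens):
    # Greedily keep the most query-relevant EDUs until the token budget
    # is spent, then restore original order (instead of hard truncation).
    def relevance(edu):
        return max(cosine(vectorize(edu), vectorize(q)) for q in queries)
    scored = sorted(enumerate(edus), key=lambda p: relevance(p[1]),
                    reverse=True)
    kept, used = [], 0
    for idx, edu in scored:
        n = len(edu.split())
        if used + n <= budget_tokens:
            kept.append((idx, edu))
            used += n
    return [edu for _, edu in sorted(kept)]
```

The key contrast with truncation is in `filter_edus`: content is dropped by relevance at the EDU level, not cut off at a document boundary, so salient material late in a low-ranked document can still survive.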
Shiyin Tan
Tokyo Institute of Technology
machine learning, graph neural network, Natural Language Processing, Large Language Model
Jaeeon Park
Institute of Science Tokyo, Tokyo, Japan
Dongyuan Li
The University of Tokyo, Center for Spatial Information Science, Tokyo, Japan; Institute of Science Tokyo, Tokyo, Japan
Renhe Jiang
The University of Tokyo
AI, Spatio-temporal Data Mining, Human Mobility, Graph Learning, Time Series Forecasting
Manabu Okumura
Institute of Science Tokyo, Tokyo, Japan