Principled Content Selection to Generate Diverse and Personalized Multi-Document Summaries

πŸ“… 2025-05-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the "lost-in-the-middle" problem in large language models (LLMs), which leads to uneven source coverage in multi-document summarization (MDS), this paper proposes a three-stage framework: (1) atomic key-point extraction; (2) diversity-aware content selection via determinantal point processes (DPPs), enabling joint optimization of coverage and diversity; and (3) intent-aware rewriting, leveraging plug-and-play user-intent kernel functions for controllable personalization. The method combines prompt-based extraction and rewriting with principled DPP-based content selection. Evaluated on the DiverseSumm benchmark, it consistently improves source coverage across a range of LLMs while preserving summary relevance, content diversity, and alignment with user-specified intents.

πŸ“ Abstract
While large language models (LLMs) are increasingly capable of handling longer contexts, recent work has demonstrated that they exhibit the "lost in the middle" phenomenon (Liu et al., 2024) of unevenly attending to different parts of the provided context. This hinders their ability to cover diverse source material in multi-document summarization, as noted in the DiverseSumm benchmark (Huang et al., 2024). In this work, we contend that principled content selection is a simple way to increase source coverage on this task. As opposed to prompting an LLM to perform the summarization in a single step, we explicitly divide the task into three steps -- (1) reducing document collections to atomic key points, (2) using determinantal point processes (DPP) to select key points that prioritize diverse content, and (3) rewriting to the final summary. By combining prompting steps, for extraction and rewriting, with principled techniques, for content selection, we consistently improve source coverage on the DiverseSumm benchmark across various LLMs. Finally, we also show that by incorporating relevance to a provided user intent into the DPP kernel, we can generate personalized summaries that cover relevant source information while retaining coverage.
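The DPP-based selection step described above can be sketched with a greedy MAP approximation over a quality-weighted kernel, where the diagonal encodes relevance (e.g., to a user intent) and the off-diagonal encodes pairwise similarity between key points. This is a minimal illustrative sketch under assumed inputs (key-point embeddings and optional intent-relevance scores), not the paper's implementation:

```python
import numpy as np

def dpp_greedy_select(embeddings, k, intent_scores=None):
    """Greedily pick k items approximating the DPP MAP objective.

    The kernel L_ij = q_i * sim(i, j) * q_j balances relevance
    (q, e.g. similarity of a key point to a user intent) against
    diversity (penalizing mutually similar selections).
    Hypothetical sketch; names and inputs are assumptions.
    """
    n = len(embeddings)
    # Normalize rows so dot products are cosine similarities.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = X @ X.T
    q = np.ones(n) if intent_scores is None else np.asarray(intent_scores, float)
    L = q[:, None] * S * q[None, :]  # quality-weighted DPP kernel

    selected, remaining = [], list(range(n))
    for _ in range(min(k, n)):
        best, best_gain = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            # Log-determinant of the kernel submatrix: large when the
            # candidate set is both relevant and mutually diverse.
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # all remaining candidates are redundant
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```

With two near-duplicate key points and one distinct one, the determinant term steers the second pick away from the duplicate, which is the coverage behavior the paper relies on; intent scores on the diagonal would additionally bias selection toward relevant points.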
Problem

Research questions and friction points this paper is trying to address.

Addresses uneven attention in LLMs for multi-document summarization
Improves source coverage via principled content selection steps
Generates personalized summaries by incorporating user intent
Innovation

Methods, ideas, or system contributions that make the work stand out.

Atomic key-point extraction from documents
DPP for diverse content selection
User intent integration for personalization