Incentive-Aligned Multi-Source LLM Summaries

📅 2025-09-29
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current large language models struggle with factual accuracy in multi-source summarization, are vulnerable to adversarial content, and give sources no effective incentive to be honest. This paper proposes the Truthful Text Summarization (TTS) framework: first, decompose a draft synthesis into atomic propositions and elicit each source's stance on them; second, score source consistency with an adapted multi-task peer-prediction mechanism and dynamically filter unreliable sources; and third, regenerate the summary from the remaining sources. TTS is the first framework to achieve source incentive alignment without ground-truth labels, making honest reporting a Nash equilibrium strategy. Experiments demonstrate that TTS significantly improves factual accuracy and adversarial robustness while preserving fluency, aligning a source's visibility with its credibility.

๐Ÿ“ Abstract
Large language models (LLMs) are increasingly used in modern search and answer systems to synthesize multiple, sometimes conflicting, texts into a single response, yet current pipelines offer weak incentives for sources to be accurate and are vulnerable to adversarial content. We introduce Truthful Text Summarization (TTS), an incentive-aligned framework that improves factual robustness without ground-truth labels. TTS (i) decomposes a draft synthesis into atomic claims, (ii) elicits each source's stance on every claim, (iii) scores sources with an adapted multi-task peer-prediction mechanism that rewards informative agreement, and (iv) filters unreliable sources before re-summarizing. We establish formal guarantees that align a source's incentives with informative honesty, making truthful reporting the utility-maximizing strategy. Experiments show that TTS improves factual accuracy and robustness while preserving fluency, aligning exposure with informative corroboration and disincentivizing manipulation.
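The four stages in the abstract can be sketched as a thin orchestration layer. In this sketch, `llm` is a hypothetical text-in/text-out model call, the prompts are illustrative placeholders rather than the paper's, and the scoring function is supplied by the caller; it is a minimal illustration of the pipeline shape, not the authors' implementation.

```python
from typing import Callable, Dict, List

def tts_pipeline(sources: List[str],
                 llm: Callable[[str], str],
                 score_fn: Callable[[Dict[int, Dict[str, str]]], Dict[int, float]],
                 threshold: float = 0.0) -> str:
    # (i) draft a synthesis and decompose it into atomic claims
    draft = llm("Summarize:\n" + "\n---\n".join(sources))
    claims = [c for c in llm("List the atomic claims in:\n" + draft).splitlines() if c]
    # (ii) elicit each source's stance (support / oppose / abstain) on every claim
    stances = {
        i: {c: llm(f"Stance of the text below on the claim '{c}'?\n{text}")
            for c in claims}
        for i, text in enumerate(sources)
    }
    # (iii) score sources with a peer-prediction mechanism (supplied by caller)
    scores = score_fn(stances)
    # (iv) drop low-scoring sources, then re-summarize the survivors
    kept = [sources[i] for i in sorted(stances) if scores[i] > threshold]
    return llm("Summarize:\n" + "\n---\n".join(kept))
```

Only stage (iii) carries the incentive guarantees; the surrounding stages are standard LLM calls, which is why the scoring function is factored out here.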
Problem

Research questions and friction points this paper is trying to address.

Improving factual robustness in LLM summaries without ground-truth labels
Aligning source incentives with informative honesty to maximize utility
Filtering unreliable sources before re-summarizing conflicting text responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes the draft synthesis into atomic claims
Elicits each source's stance on every claim
Scores sources with an adapted multi-task peer-prediction mechanism
Filters unreliable sources before the final re-summarization
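The scoring-and-filtering step can be illustrated with a correlated-agreement-style peer-prediction score: a source earns a bonus for agreeing with a peer on the same claim and pays a penalty equal to the chance-level agreement across mismatched claim pairs, so an uninformative constant reporter nets roughly zero. This is a simplified sketch under assumed conventions (stances encoded as +1/0/−1, a made-up filtering threshold), not the paper's exact mechanism.

```python
from typing import Dict, Hashable, List, Tuple

# claim id -> stance in {+1 support, 0 abstain, -1 oppose}
Stances = Dict[Hashable, int]

def ca_score(reports_i: Stances, reports_j: Stances) -> float:
    """Bonus: agreement rate on the same claim. Penalty: agreement rate
    over all mismatched claim pairs, i.e. the chance level implied by the
    peers' marginal stance distributions. Informative, honest agreement
    beats chance; a constant reporter scores about zero."""
    claims = sorted(set(reports_i) & set(reports_j))
    n = len(claims)
    bonus = sum(reports_i[c] == reports_j[c] for c in claims) / n
    penalty = sum(reports_i[a] == reports_j[b]
                  for a in claims for b in claims if a != b) / (n * (n - 1))
    return bonus - penalty

def score_and_filter(stances: Dict[str, Stances],
                     threshold: float = 0.2) -> Tuple[Dict[str, float], List[str]]:
    """Average each source's pairwise score against all peers and keep
    only sources above the (hypothetical) threshold."""
    scores = {
        i: sum(ca_score(stances[i], stances[j]) for j in stances if j != i)
           / (len(stances) - 1)
        for i in stances
    }
    kept = [i for i, s in scores.items() if s > threshold]
    return scores, kept
```

For example, with three honest sources reporting varied stances and one adversary asserting support for every claim, the adversary's bonus equals its chance-level penalty, so it scores near zero and is filtered while the honest sources survive.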