🤖 AI Summary
Current large language models struggle with factual accuracy in multi-source summarization, are vulnerable to adversarial content, and lack effective incentives for source honesty. This paper proposes the Truthful Text Summarization (TTS) framework: first, decomposing a draft synthesis into atomic propositions and extracting each source's stance on them; second, assessing source consistency via an adapted multi-task peer-prediction mechanism to dynamically filter unreliable sources; and third, regenerating the summary from the retained sources. TTS is the first framework to achieve source incentive alignment without ground-truth labels, ensuring that honest reporting constitutes a Nash equilibrium strategy. Experiments demonstrate that TTS significantly improves factual accuracy and adversarial robustness while preserving linguistic fluency, thereby enabling positive synergy between information credibility and visibility.
📄 Abstract
Large language models (LLMs) are increasingly used in modern search and answer systems to synthesize multiple, sometimes conflicting, texts into a single response, yet current pipelines offer weak incentives for sources to be accurate and are vulnerable to adversarial content. We introduce Truthful Text Summarization (TTS), an incentive-aligned framework that improves factual robustness without ground-truth labels. TTS (i) decomposes a draft synthesis into atomic claims, (ii) elicits each source's stance on every claim, (iii) scores sources with an adapted multi-task peer-prediction mechanism that rewards informative agreement, and (iv) filters unreliable sources before re-summarizing. We establish formal guarantees that align a source's incentives with informative honesty, making truthful reporting the utility-maximizing strategy. Experiments show that TTS improves factual accuracy and robustness while preserving fluency, aligning exposure with informative corroboration and disincentivizing manipulation.
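The scoring-and-filtering core of steps (iii) and (iv) can be illustrated with a minimal sketch. The function below is not the paper's exact mechanism; it is a simplified, correlated-agreement-style peer-prediction score over hypothetical binary stance vectors (1 = source supports the claim, 0 = disputes it): each source is rewarded for agreeing with peers on the *same* claim and penalized by its baseline rate of agreeing with peers on *different* claims, so uninformative or inverted reporting scores low without any ground-truth labels.

```python
def peer_prediction_scores(stances):
    """Simplified multi-task peer-prediction score.

    For each source, average over all peers and claims:
      (agreement with the peer on the same claim)
      minus (baseline agreement with the peer across other claims).
    Informative, honest sources score positive; adversarial or
    constant reporters score near or below zero.
    """
    sources = list(stances)
    n_claims = len(next(iter(stances.values())))
    scores = {}
    for s in sources:
        peers = [p for p in sources if p != s]
        total = 0.0
        for p in peers:
            for c in range(n_claims):
                # Bonus: same-claim agreement with the peer.
                bonus = 1.0 if stances[s][c] == stances[p][c] else 0.0
                # Penalty: chance agreement on unrelated claims.
                penalty = sum(
                    stances[s][c] == stances[p][c2]
                    for c2 in range(n_claims) if c2 != c
                ) / (n_claims - 1)
                total += bonus - penalty
        scores[s] = total / (len(peers) * n_claims)
    return scores


# Hypothetical stance matrix over six atomic claims: three honest
# sources agree; one adversary inverts every stance.
stances = {
    "src_a": [1, 1, 0, 1, 0, 0],
    "src_b": [1, 1, 0, 1, 0, 0],
    "src_c": [1, 1, 0, 1, 0, 0],
    "adv":   [0, 0, 1, 0, 1, 1],
}
scores = peer_prediction_scores(stances)
# Step (iv): keep only sources scoring above zero before re-summarizing.
reliable = [s for s, v in scores.items() if v > 0.0]
```

On this toy input the honest sources score positive and the adversary negative, so only the three honest sources survive the filter, which is the sense in which exposure is aligned with informative corroboration.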