InFact: Informativeness Alignment for Improved LLM Factuality

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate outputs that are factually correct yet information-poor (e.g., "Obama was born in the United States" instead of "Obama was born in Honolulu, Hawaii"), leaving factual completeness lacking. To address this, we propose an **informativeness alignment mechanism**, the first to explicitly model informational richness as an optimizable objective, jointly enhancing both factual correctness and descriptive granularity. Our method constructs an informativeness scoring function grounded in established factual evaluation benchmarks and integrates it into end-to-end alignment training via reinforcement learning and preference optimization. Experiments demonstrate significant, simultaneous improvements in factual accuracy and information completeness across multiple factual evaluation benchmarks, without requiring additional human annotations. These results reveal a positive coupling between factual correctness and informational richness, suggesting that optimizing for one benefits the other.
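The summary does not spell out the scoring function itself. As a rough illustration only, a score in the spirit of atomic-fact benchmarks (e.g., FActScore-style decomposition) might count supported atomic facts and penalize unsupported ones; the decomposition step, the `penalty` weight, and the example facts below are assumptions, not the paper's actual formulation:

```python
def informativeness_score(atomic_facts, penalty=1.0):
    """Hypothetical fact-count score.

    atomic_facts: list of (fact_text, is_supported) pairs, assumed to come
    from an upstream decomposition + verification step. Supported facts add
    to the score; unsupported (hallucinated) facts subtract from it.
    """
    supported = sum(1 for _, ok in atomic_facts if ok)
    unsupported = len(atomic_facts) - supported
    return supported - penalty * unsupported

# The terse answer is correct but carries one atomic fact; the richer
# answer carries three, so it scores higher under this sketch.
terse = [("Obama was born in the United States", True)]
rich = [("Obama was born in Honolulu", True),
        ("Honolulu is in Hawaii", True),
        ("Hawaii is in the United States", True)]
```

Under such a score, "Honolulu, Hawaii, United States" dominates "the United States" while a hallucinated detail would be penalized, which matches the paper's goal of rewarding granularity only when it stays correct.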

📝 Abstract
Factual completeness is a general term that captures how detailed and informative a factually correct text is. For instance, the sentence "Barack Obama was born in the United States" is factually correct, yet less informative than "Barack Obama was born in Honolulu, Hawaii, United States". Beyond their well-known tendency to hallucinate factually incorrect text, LLMs may also generate text that is factually correct yet less informative than other available choices. In this work, we tackle this problem by proposing an informativeness alignment mechanism. This mechanism takes advantage of recent factual benchmarks to construct an informativeness alignment objective that prioritizes answers that are both correct and informative. A key finding of our work is that training a model to maximize this objective, or to optimize preferences under it, improves not just informativeness but also factuality.
Problem

Research questions and friction points this paper is trying to address.

LLMs generate factually correct but less informative text
Need alignment mechanism for correct and informative answers
Improving informativeness can also enhance overall factuality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Informativeness alignment mechanism for LLMs
Leverages factual benchmarks for alignment
Optimizes preference for correct, informative answers
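The page names preference optimization but not the exact objective. A minimal sketch of how informativeness preferences could plug into a standard DPO-style loss, where the more informative correct answer is treated as the preferred response, might look as follows (the pairing rule and the `beta` value are assumptions, not details from the paper):

```python
import math

def dpo_loss(logp_w, logp_l, logp_w_ref, logp_l_ref, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * margin).

    Hypothetical pairing rule: the 'winner' w is the candidate answer with
    the higher informativeness score, the 'loser' l the less informative
    one. logp_* are sequence log-probabilities under the trained policy
    and the frozen reference policy.
    """
    margin = beta * ((logp_w - logp_w_ref) - (logp_l - logp_l_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# At the reference policy the margin is zero and the loss equals log(2);
# as the policy shifts probability toward the informative answer, the
# margin grows and the loss shrinks.
neutral = dpo_loss(-5.0, -5.0, -5.0, -5.0)
improved = dpo_loss(-4.0, -6.0, -5.0, -5.0)
```

Under this framing, the claimed factuality gains would fall out naturally: pushing probability mass toward detailed-and-correct answers also pushes it away from incorrect ones.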