Integrative Decoding: Improve Factuality via Implicit Self-consistency

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the factual inconsistency of large language models (LLMs) in open-ended generation tasks, this paper proposes Integrative Decoding (ID), a novel decoding paradigm that incorporates self-consistency implicitly into standard autoregressive generation. At each decoding step, ID processes multiple prefix-conditioned inputs in parallel and performs token-level weighted aggregation over their predictions, effectively modeling consensus without explicit voting or re-ranking. Because ID embeds self-consistency directly into token-by-token decoding, it requires no task-specific formatting, post-hoc processing, or model fine-tuning, giving it strong generality and scalability. Empirical evaluation on TruthfulQA, Biographies, and LongFact demonstrates consistent improvements in factual accuracy (+11.2%, +15.4%, and +8.5%, respectively), with gains increasing as the number of sampled responses grows. ID significantly outperforms existing explicit re-ranking and majority-voting approaches.

📝 Abstract
Self-consistency-based approaches, which involve repeatedly sampling multiple outputs and selecting the most consistent one as the final response, prove to be remarkably effective in improving the factual accuracy of large language models. Nonetheless, existing methods usually have strict constraints on the task format, largely limiting their applicability. In this paper, we present Integrative Decoding (ID), to unlock the potential of self-consistency in open-ended generation tasks. ID operates by constructing a set of inputs, each prepended with a previously sampled response, and then processes them concurrently, with the next token being selected by aggregating all their corresponding predictions at each decoding step. In essence, this simple approach implicitly incorporates self-consistency in the decoding objective. Extensive evaluation shows that ID consistently enhances factuality over a wide range of language models, with substantial improvements on the TruthfulQA (+11.2%), Biographies (+15.4%) and LongFact (+8.5%) benchmarks. The performance gains amplify progressively as the number of sampled responses increases, indicating the potential of ID to scale up with repeated sampling.
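The aggregation step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function name `integrative_decode_step` is invented, and averaging log-probabilities across paths (then taking the argmax) is one plausible way to realize the described token-level aggregation of predictions from multiple prefix-conditioned inputs.

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax over the last axis."""
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def integrative_decode_step(logits_per_path):
    """Pick the next token by aggregating predictions across k inputs,
    each prepended with a different previously sampled response.

    logits_per_path: (k, vocab_size) next-token logits, one row per input.
    Averaging log-probabilities across rows favors tokens that all sampled
    responses agree on -- the implicit self-consistency signal.
    (Illustrative choice; the paper's exact aggregation may differ.)
    """
    log_probs = log_softmax(np.asarray(logits_per_path, dtype=float))
    consensus = log_probs.mean(axis=0)  # token-level aggregation
    return int(consensus.argmax())

# Toy vocabulary of 3 tokens; three prefix-conditioned paths.
paths = [
    [2.0, 1.5, 0.0],  # this path slightly prefers token 0
    [0.0, 1.5, 2.0],  # this path slightly prefers token 2
    [1.0, 2.0, 0.5],  # this path prefers token 1
]
print(integrative_decode_step(paths))  # -> 1, the consensus token
```

No single path's top choice wins outright; token 1 is selected because it is ranked highly by every path, which is exactly the consensus behavior the method aims for.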
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Information Accuracy
Open-ended Questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrative Decoding
Self-consistency
Open-domain Questions
Yi Cheng — The Hong Kong Polytechnic University
Xiao Liang — Tsinghua University
Yeyun Gong — Microsoft Research Asia (Natural Language Generation, Question Answering, Pre-training)
Wen Xiao — Microsoft Azure AI
Song Wang — Microsoft Azure AI
Yuji Zhang — Postdoc, University of Illinois at Urbana-Champaign (NLP/AI/ML, Interpretability, Trustworthy AI, Knowledge of LM, Reasoning)
Wenjun Hou — The Hong Kong Polytechnic University & Southern University of Science and Technology (Radiology Report Generation, NLP, AI Agent)
Kaishuai Xu — The Hong Kong Polytechnic University (LLM Reasoning, Medical AI)
Wenge Liu — The Hong Kong Polytechnic University
Wenjie Li — The Hong Kong Polytechnic University
Jian Jiao — Microsoft Research
Qi Chen — Microsoft Research
Peng Cheng — Microsoft Research
Wayne Xiong — Microsoft Research