DHI: Leveraging Diverse Hallucination Induction for Enhanced Contrastive Factuality Control in Large Language Models

📅 2026-01-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of hallucination—where large language models generate inaccurate or fabricated information—by proposing the Diverse Hallucination Induction (DHI) framework. Unlike existing approaches that are limited to specific hallucination types and rely on annotated hallucination data, DHI induces diverse hallucinations without requiring pre-labeled examples by redesigning the loss function and incorporating a causal attention mask. Furthermore, it enhances factual consistency through adaptive rationale-constrained optimization during contrastive decoding. Evaluated across multiple hallucination benchmarks, the proposed method significantly outperforms current contrastive decoding techniques, demonstrating improved factual accuracy and generalization capability.
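To make the loss redesign concrete, below is a minimal sketch of a per-token down-weighted cross-entropy for training the Evil LLM. The function name `down_weighted_ce_loss`, the `factual_mask` input, and the `down_weight` hyperparameter are illustrative assumptions; the paper's actual loss formulation may differ.

```python
import torch
import torch.nn.functional as F

def down_weighted_ce_loss(logits, targets, factual_mask, down_weight=0.1):
    """Token-level cross-entropy where positions flagged in `factual_mask`
    (targeted factually correct tokens) receive a reduced weight, so the
    Evil model is free to drift away from the correct token there while
    staying fully supervised everywhere else.

    logits:       (batch, seq_len, vocab) model outputs
    targets:      (batch, seq_len) gold token ids
    factual_mask: (batch, seq_len) bool, True at targeted factual positions
    """
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)), targets.view(-1), reduction="none"
    )
    weights = torch.where(
        factual_mask.view(-1),
        torch.full_like(per_token, down_weight),
        torch.ones_like(per_token),
    )
    return (weights * per_token).sum() / weights.sum()
```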

📝 Abstract
Large language models (LLMs) frequently produce inaccurate or fabricated information, known as "hallucinations," which compromises their reliability. Existing approaches often train an "Evil LLM" to deliberately generate hallucinations on curated datasets, using these induced hallucinations to guide contrastive decoding against a reliable "positive model" for hallucination mitigation. However, this strategy is limited by the narrow diversity of hallucinations induced, as Evil LLMs trained on specific error types tend to reproduce only these particular patterns, thereby restricting their overall effectiveness. To address these limitations, we propose DHI (Diverse Hallucination Induction), a novel training framework that enables the Evil LLM to generate a broader range of hallucination types without relying on pre-annotated hallucination data. DHI employs a modified loss function that down-weights the generation of specific factually correct tokens, encouraging the Evil LLM to produce diverse hallucinations at targeted positions while maintaining overall factual content. Additionally, we introduce a causal attention masking adaptation to reduce the impact of this penalization on the generation of subsequent tokens. During inference, we apply an adaptive rationality constraint that restricts contrastive decoding to tokens where the positive model exhibits high confidence, thereby avoiding unnecessary penalties on factually correct tokens. Extensive empirical results show that DHI achieves significant performance gains over other contrastive decoding-based approaches across multiple hallucination benchmarks.
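The inference-time behavior described in the abstract can be sketched as a single greedy decoding step. This is a minimal illustration only: the function name `contrastive_step`, the penalty weight `alpha`, and the confidence threshold `beta` are assumed hyperparameters, and the exact form of the adaptive rationality constraint in the paper may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_step(pos_logits, evil_logits, alpha=1.0, beta=0.1):
    """One greedy decoding step: contrast the positive model against the
    Evil model, but only over tokens the positive model is confident in.

    pos_logits:  (vocab,) logits from the reliable positive model
    evil_logits: (vocab,) logits from the hallucination-inducing Evil model
    alpha:       strength of the contrastive penalty
    beta:        confidence threshold relative to the positive model's
                 top token (the adaptive rationality constraint)
    """
    pos_logprobs = F.log_softmax(pos_logits, dim=-1)
    evil_logprobs = F.log_softmax(evil_logits, dim=-1)

    # Restrict the candidate set to tokens whose positive-model probability
    # is within a factor `beta` of the top token, and apply the contrastive
    # penalty only inside that high-confidence set; everything else is
    # excluded from decoding at this step.
    threshold = pos_logprobs.max() + torch.log(torch.tensor(beta))
    confident = pos_logprobs >= threshold

    scores = torch.full_like(pos_logprobs, float("-inf"))
    scores[confident] = pos_logprobs[confident] - alpha * evil_logprobs[confident]
    return scores.argmax().item()
```

In a full generation loop this step would be applied at every position, with the chosen token appended to both models' contexts before the next step.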
Problem

Research questions and friction points this paper is trying to address.

hallucination
large language models
contrastive decoding
factuality control
diverse hallucination induction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diverse Hallucination Induction
Contrastive Decoding
Hallucination Mitigation
Adaptive Rationality Constraint
Causal Attention Masking
Jiani Guo
School of Computer Science, Wuhan University, Wuhan, China
Xiangke Zeng
School of Computer Science, Wuhan University, Wuhan, China
Jie Wu
SIGS, Tsinghua University
Code Generation
Zuchao Li
Wuhan University
Natural Language Processing, Machine Learning