Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias

📅 2025-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion models for text generation suffer from “text hallucination”—producing symbolically correct but semantically incoherent, syntactically and logically ill-formed sequences. Method: We identify the root cause as the denoising network’s over-reliance on locally strong correlations, undermining global syntactic modeling. Using score-based diffusion formalism, experimental probing, and theoretical training dynamics analysis—including parity learning analysis of a two-layer MLP on the hypercube—we systematically characterize this bias. Contribution/Results: We empirically validate that this local generation bias is pervasive across architectures (MLP, Transformer) and theoretically prove it induces collapse of global dependency modeling. This yields the first architecture-agnostic, mechanistic account of local generation bias, establishing its causal link to text hallucination. We further generalize the framework to unify explanations of hallucination across modalities, providing novel design principles for hallucination mitigation.

Technology Category

Application Category

📝 Abstract
Score-based diffusion models have achieved incredible performance in generating realistic images, audio, and video data. While these models produce high-quality samples with impressive details, they often introduce unrealistic artifacts, such as distorted fingers or hallucinated texts with no meaning. This paper focuses on textual hallucinations, where diffusion models correctly generate individual symbols but assemble them in a nonsensical manner. Through experimental probing, we consistently observe that such phenomenon is attributed it to the network's local generation bias. Denoising networks tend to produce outputs that rely heavily on highly correlated local regions, particularly when different dimensions of the data distribution are nearly pairwise independent. This behavior leads to a generation process that decomposes the global distribution into separate, independent distributions for each symbol, ultimately failing to capture the global structure, including underlying grammar. Intriguingly, this bias persists across various denoising network architectures including MLP and transformers which have the structure to model global dependency. These findings also provide insights into understanding other types of hallucinations, extending beyond text, as a result of implicit biases in the denoising models. Additionally, we theoretically analyze the training dynamics for a specific case involving a two-layer MLP learning parity points on a hypercube, offering an explanation of its underlying mechanism.
Problem

Research questions and friction points this paper is trying to address.

Explains text hallucination in diffusion models
Identifies local generation bias as the cause
Analyzes training dynamics in denoising networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes local generation bias in diffusion models
Explores text hallucination via experimental probing
Theoretically examines training dynamics in MLP
🔎 Similar Papers
No similar papers found.
R
Rui Lu
Department of Automation, Tsinghua University
Runzhe Wang
Runzhe Wang
Princeton University
Kaifeng Lyu
Kaifeng Lyu
Tsinghua University
X
Xitai Jiang
Qiuzhen College, Tsinghua University
G
Gao Huang
Department of Automation, Tsinghua University
M
Mengdi Wang
Electrical and Computer Engineering, Princeton University