DynHD: Hallucination Detection for Diffusion Large Language Models via Denoising Dynamics Deviation Learning

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of hallucination in diffusion-based large language models (D-LLMs), which undermines their reliability during generation. The authors propose a novel hallucination detection paradigm that jointly models token-level semantic information density imbalance and the dynamic evolution of uncertainty throughout the denoising process. Specifically, a semantic-aware evidence construction module identifies critical tokens, while a reference evidence generator learns the trajectory of uncertainty. Hallucinations are detected based on deviations in the denoising dynamics. Evaluated across multiple benchmarks and backbone architectures, the proposed method significantly outperforms current state-of-the-art approaches, achieving superior detection accuracy and computational efficiency.

📝 Abstract
Diffusion large language models (D-LLMs) have emerged as a promising alternative to auto-regressive models due to their iterative refinement capabilities. However, hallucinations remain a critical issue that hinders their reliability. To detect hallucinated responses in model outputs, token-level uncertainty (e.g., entropy) has been widely used as an effective signal for potential factual errors. Nevertheless, the fixed-length generation paradigm of D-LLMs implies that tokens contribute unevenly to hallucination detection, with only a small subset providing meaningful signals. Moreover, the evolution trend of uncertainty throughout the diffusion process can also provide important signals, highlighting the necessity of modeling its denoising dynamics for hallucination detection. In this paper, we propose DynHD, which bridges these gaps from both spatial (token sequence) and temporal (denoising dynamics) perspectives. To address the information density imbalance across tokens, we propose a semantic-aware evidence construction module that extracts hallucination-indicative signals by filtering out non-informative tokens and emphasizing semantically meaningful ones. To model denoising dynamics for hallucination detection, we introduce a reference evidence generator that learns the expected evolution trajectory of uncertainty evidence, along with a deviation-based hallucination detector that makes predictions by measuring the discrepancy between the observed and reference trajectories. Extensive experiments demonstrate that DynHD consistently outperforms state-of-the-art baselines while achieving higher efficiency across multiple benchmarks and backbone models.
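The core idea of the abstract — score a response by how far its observed uncertainty trajectory over denoising steps deviates from a reference trajectory, restricted to informative tokens — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: all function names are invented, the token mask stands in for the semantic-aware evidence construction module, and the reference trajectory is assumed to be given (in the paper it is produced by a learned reference evidence generator).

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy of per-token predictive distributions.

    probs: array of shape (num_tokens, vocab_size); each row sums to 1.
    Returns an array of shape (num_tokens,).
    """
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p), axis=-1)

def deviation_score(observed_traj, reference_traj, token_mask):
    """Mean squared deviation between observed and reference uncertainty
    trajectories, restricted to semantically informative tokens.

    observed_traj, reference_traj: (num_steps, num_tokens) entropy values.
    token_mask: boolean (num_tokens,) marking informative tokens
                (a stand-in for the paper's evidence construction module).
    """
    diff = (observed_traj - reference_traj)[:, token_mask]
    return float(np.mean(diff ** 2))

def flag_hallucination(observed_traj, reference_traj, token_mask, threshold):
    """Flag a response as hallucinated when the trajectory deviation
    exceeds a chosen threshold (here a hypothetical fixed cutoff)."""
    return deviation_score(observed_traj, reference_traj, token_mask) > threshold
```

Under this sketch, a well-calibrated response whose entropy decays like the reference trajectory yields a near-zero score, while a response whose uncertainty evolves abnormally during denoising is flagged once its deviation crosses the threshold.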
Problem

Research questions and friction points this paper is trying to address.

hallucination detection
diffusion large language models
denoising dynamics
token-level uncertainty
factual errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination detection
diffusion language models
denoising dynamics
uncertainty evolution
semantic-aware evidence