🤖 AI Summary
This work proposes a novel thermodynamic perspective for detecting hallucinations in large language models, which often generate highly confident yet factually incorrect statements that evade conventional uncertainty metrics. Treating factual claims as stable attractors on the generative manifold and hallucinations as unstable states, the method perturbs input claims with noise and reconstructs them using a discrete text diffusion model. A natural language inference (NLI) discriminator then computes a semantic energy score that quantifies deep semantic inconsistencies between the original and reconstructed statements. The approach integrates thermodynamic stability into hallucination detection by jointly calibrating generative stability and discriminative confidence. In a fully unsupervised setting, it achieves an AUROC of 0.725 on the FEVER dataset, outperforming baselines by 1.5%, and shows an over 4% zero-shot improvement on the multi-hop HOVER benchmark, confirming robustness to distribution shifts.
📝 Abstract
Large Language Models (LLMs) frequently hallucinate plausible but incorrect assertions, a vulnerability often missed by uncertainty metrics when models are confidently wrong. We propose DiffuTruth, an unsupervised framework that reconceptualizes fact verification via non-equilibrium thermodynamics, positing that factual truths act as stable attractors on a generative manifold while hallucinations are unstable. We introduce the Generative Stress Test, in which claims are corrupted with noise and reconstructed using a discrete text diffusion model. We define Semantic Energy, a metric that measures the semantic divergence between the original claim and its reconstruction using an NLI critic. Unlike vector-space reconstruction errors, Semantic Energy isolates deep factual contradictions. We further propose a Hybrid Calibration that fuses this stability signal with discriminative confidence. Extensive experiments on FEVER demonstrate that DiffuTruth achieves a state-of-the-art unsupervised AUROC of 0.725, outperforming baselines by 1.5% through the correction of overconfident predictions. Furthermore, we show superior zero-shot generalization on the multi-hop HOVER dataset, outperforming baselines by over 4%, confirming the robustness of thermodynamic truth properties to distribution shifts.
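The scoring pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the diffusion reconstructor and NLI critic are replaced by toy stand-ins (`toy_reconstruct`, `toy_contradiction_prob` are hypothetical names), and the mixing weight `alpha` in the hybrid score is an assumed parameter. In the actual method, reconstruction would come from a discrete text diffusion model and contradiction probabilities from a trained NLI classifier.

```python
from statistics import mean

def semantic_energy(claim, reconstruct, contradiction_prob, noise_levels):
    """Average NLI contradiction probability between a claim and its
    reconstructions across noise levels: low energy suggests a stable
    attractor (likely factual), high energy an unstable state (likely
    hallucination)."""
    return mean(
        contradiction_prob(claim, reconstruct(claim, t))
        for t in noise_levels
    )

def hybrid_score(energy, confidence, alpha=0.5):
    """Hybrid Calibration sketch: fuse generative stability (1 - energy)
    with discriminative confidence. alpha is a hypothetical weight."""
    return alpha * (1.0 - energy) + (1.0 - alpha) * confidence

# Toy stand-ins for illustration only: a factual claim survives
# perturbation unchanged, while a hallucinated one drifts toward
# the true fact under reconstruction.
def toy_reconstruct(claim, t):
    return claim if "Paris" in claim else claim.replace("Rome", "Paris")

def toy_contradiction_prob(premise, hypothesis):
    return 0.05 if premise == hypothesis else 0.90

levels = [0.2, 0.5, 0.8]
e_stable = semantic_energy("The capital of France is Paris.",
                           toy_reconstruct, toy_contradiction_prob, levels)
e_unstable = semantic_energy("The capital of France is Rome.",
                             toy_reconstruct, toy_contradiction_prob, levels)
print(e_stable < e_unstable)  # True: the factual claim has lower energy
```

Even when both claims carry the same discriminative confidence, the hybrid score separates them, which mirrors how the method corrects overconfident predictions.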