🤖 AI Summary
This work addresses hallucination in large language models (LLMs), identifying its root cause as a progressive shift in the latent space from factual to distorted representations. We propose the first hallucination modeling framework that jointly characterizes layer-wise hidden-state trajectory deviations and token-level probability flow distortions. The resulting detector is fully unsupervised, requiring neither fine-tuning nor annotations: it analyzes intermediate Transformer layer state trajectories, tracks the evolution of token-level probability entropy, and quantifies inter-layer distributional drift via the Wasserstein distance. Evaluated on the FactScore and TruthfulQA benchmarks, our approach achieves a 12.7% absolute F1 improvement over prior unsupervised methods. It generalizes well across diverse LLMs and architectures while remaining fully compatible with real-time inference, requiring no architectural modifications and adding no latency overhead.
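The two signals described above, inter-layer distributional drift and token-level probability entropy, can be sketched with standard numerical tools. The following is a minimal toy illustration of these quantities, not the paper's implementation; the function names, array shapes, and the use of per-layer activation histograms are assumptions made for the example:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def layer_drift(hidden_states):
    """Wasserstein distance between the (flattened) activation
    distributions of consecutive layers: one score per layer pair."""
    return [
        wasserstein_distance(hidden_states[i].ravel(), hidden_states[i + 1].ravel())
        for i in range(len(hidden_states) - 1)
    ]

def token_entropy(probs):
    """Shannon entropy of each token's next-token probability
    distribution (rows of `probs` sum to 1)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# Toy stand-in data: 4 "layers" of hidden states for a 5-token
# sequence with hidden dimension 8, plus a 50-word vocabulary.
rng = np.random.default_rng(0)
states = [rng.normal(loc=0.1 * i, size=(5, 8)) for i in range(4)]
probs = rng.dirichlet(np.ones(50), size=5)

drift = layer_drift(states)   # 3 inter-layer drift scores
ent = token_entropy(probs)    # 5 per-token entropy values
```

In a detection setting, such per-layer drift scores and per-token entropies would be aggregated into features for scoring a generated passage; how HalluShift combines them is described in the paper itself.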
📝 Abstract
Large Language Models (LLMs) have recently garnered widespread attention due to their adeptness at generating innovative responses to given prompts across a multitude of domains. However, LLMs often suffer from the inherent limitation of hallucination, generating incorrect information while maintaining well-structured and coherent responses. In this work, we hypothesize that hallucinations stem from the internal dynamics of LLMs. Our observations indicate that, during passage generation, LLMs tend to deviate from factual accuracy in subtle parts of their responses, eventually shifting toward misinformation. This phenomenon bears a resemblance to human cognition, where individuals may hallucinate while maintaining logical coherence, embedding uncertainty within minor segments of their speech. To investigate this further, we introduce an innovative approach, HalluShift, designed to analyze distribution shifts in the internal state space and token probabilities of LLM-generated responses. Our method attains superior performance compared to existing baselines across various benchmark datasets. Our codebase is available at https://github.com/sharanya-dasgupta001/hallushift.