🤖 AI Summary
This work addresses the challenge of hallucination detection in large language models (LLMs). We propose an unsupervised uncertainty quantification method based on the effective rank of hidden states, leveraging spectral analysis of multi-layer, multi-output internal representations. The method jointly models intra-response and inter-response uncertainty—without requiring external knowledge, auxiliary modules, or model fine-tuning. Its core innovation lies in exploiting the intrinsic low-rank structure of LLM intermediate representations to enable mechanistically grounded and interpretable hallucination identification. Extensive experiments across diverse LLMs, tasks (e.g., question answering, fact verification), and benchmarks (e.g., TruthfulQA, HALO) demonstrate strong hallucination detection performance and robust generalization. Our approach significantly enhances the reliability and theoretical interpretability of LLM truthfulness assessment, offering a principled, architecture-agnostic framework for uncertainty-aware LLM evaluation.
📝 Abstract
Detecting hallucinations in large language models (LLMs) remains a fundamental challenge for their trustworthy deployment. Going beyond basic uncertainty-driven hallucination detection frameworks, we propose a simple yet powerful method that quantifies uncertainty by measuring the effective rank of hidden states drawn from multiple model outputs and different layers. Grounded in the spectral analysis of representations, our approach provides interpretable insight into the model's internal reasoning process through semantic variation, while requiring no external knowledge or additional modules, thus combining theoretical elegance with practical efficiency. In addition, we theoretically demonstrate the necessity of quantifying uncertainty both internally (within the representations of a single response) and externally (across different responses), justifying the use of representations from different layers and responses of LLMs to detect hallucinations. Extensive experiments demonstrate that our method effectively detects hallucinations and generalizes robustly across various scenarios, contributing to a new paradigm of hallucination detection for LLM truthfulness.
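To make the core quantity concrete, below is a minimal sketch of computing the effective rank of a matrix of hidden states, using the standard Roy–Vetterli definition (exponential of the Shannon entropy of the normalized singular-value distribution). The toy matrices, their shapes, and the idea of stacking per-response hidden states row-wise are illustrative assumptions; the paper's exact aggregation across layers and responses is not reproduced here.

```python
import numpy as np

def effective_rank(H: np.ndarray, eps: float = 1e-12) -> float:
    """Effective rank of a representation matrix H (rows = hidden states).

    Computed as exp(entropy) of the singular values normalized to a
    probability distribution (the Roy-Vetterli definition).
    """
    s = np.linalg.svd(H, compute_uv=False)
    p = s / (s.sum() + eps)        # normalize singular values
    p = p[p > eps]                 # drop numerically-zero entries
    return float(np.exp(-(p * np.log(p)).sum()))

# Toy illustration (hypothetical data): hidden states of several sampled
# responses stacked as rows. Semantically consistent responses yield a
# near-rank-1 matrix; divergent (possibly hallucinated) ones spread the
# spectrum and raise the effective rank.
rng = np.random.default_rng(0)
consistent = np.outer(np.ones(10), rng.normal(size=64))  # rank ~1
divergent = rng.normal(size=(10, 64))                    # near full rank
print(effective_rank(consistent) < effective_rank(divergent))  # True
```

A higher effective rank thus serves as an unsupervised uncertainty signal: no labels, external knowledge, or fine-tuning are needed, only the model's own intermediate representations.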