AI Summary
Hallucinations in large language models (LLMs) severely undermine their reliability. Existing hidden-state-based detection methods predominantly rely on static, layer-wise isolated representations, neglecting the dynamic cross-layer evolution of internal states. To address this, we propose a novel paradigm that explicitly models the inter-layer update process of hidden states within the residual stream. We introduce the ICR Score (Information Contribution to Residual Stream), the first metric to quantify each layer's contribution to the final output, enabling interpretable, cross-layer dynamic modeling. Furthermore, we design a lightweight ICR Probe, a plug-and-play diagnostic module that performs efficient hallucination detection without accessing training data or gradients. Extensive experiments demonstrate that our method achieves significantly higher detection accuracy and robustness with substantially fewer parameters, outperforming state-of-the-art approaches across multiple benchmarks. It further exhibits strong interpretability and generalization across architectures and tasks.
Abstract
Large language models (LLMs) excel at various natural language processing tasks, but their tendency to generate hallucinations undermines their reliability. Existing hallucination detection methods leveraging hidden states predominantly focus on static and isolated representations, overlooking their dynamic evolution across layers, which limits their efficacy. To address this limitation, we shift the focus to the hidden state update process and introduce a novel metric, the ICR Score (Information Contribution to Residual Stream), which quantifies the contribution of modules to the hidden states' update. We empirically validate that the ICR Score is effective and reliable in distinguishing hallucinations. Building on these insights, we propose a hallucination detection method, the ICR Probe, which captures the cross-layer evolution of hidden states. Experimental results show that the ICR Probe achieves superior performance with significantly fewer parameters. Furthermore, ablation studies and case analyses offer deeper insights into the underlying mechanism of this method, improving its interpretability.
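The abstract does not give the ICR Score's formula. As an illustration only, one plausible reading of "contribution of modules to the hidden states' update" is the norm of each module's residual-stream update, normalized across layers. The function name `icr_score_sketch` and this normalization are assumptions for this sketch, not the paper's definition:

```python
import numpy as np

def icr_score_sketch(residual_updates):
    """Hypothetical sketch (NOT the paper's exact definition): score each
    layer/module by the L2 norm of the update it writes into the residual
    stream, normalized so contributions across layers sum to 1."""
    norms = np.array([np.linalg.norm(u) for u in residual_updates])
    return norms / norms.sum()

# Toy example: residual updates from 4 layers, hidden size 8.
rng = np.random.default_rng(0)
updates = [rng.normal(size=8) for _ in range(4)]
scores = icr_score_sketch(updates)
```

Under this sketch, a hallucination detector like the ICR Probe would consume the per-layer score vector as a feature describing how the hidden state evolved across layers.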