HIDE and Seek: Detecting Hallucinations in Language Models via Decoupled Representations

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models frequently generate "hallucinations" (outputs that are factually incorrect or inconsistent with the input prompt), severely undermining their reliability. To address this, we propose a training-free, single-pass inference-time method for hallucination detection. Our approach applies the Hilbert–Schmidt Independence Criterion (HSIC) to quantify the statistical dependence between contextual and generated representations in the model's hidden layers, measuring semantic decoupling as an indicator of factuality and faithfulness errors. This yields a lightweight, low-latency, one-pass detection framework. Evaluated on four question-answering datasets, the method achieves an average relative improvement of ~29% in AUC-ROC over the best existing single-pass baseline, and matches or exceeds multi-pass methods while reducing computational overhead by ~51%.

📝 Abstract
Contemporary Language Models (LMs), while impressively fluent, often generate content that is factually incorrect or unfaithful to the input context - a critical issue commonly referred to as 'hallucination'. This tendency of LMs to generate hallucinated content undermines their reliability, especially because these fabrications are often highly convincing and therefore difficult to detect. While several existing methods attempt to detect hallucinations, most rely on analyzing multiple generations per input, leading to increased computational cost and latency. To address this, we propose a single-pass, training-free approach for effective Hallucination detectIon via Decoupled rEpresentations (HIDE). Our approach leverages the hypothesis that hallucinations result from a statistical decoupling between an LM's internal representations of input context and its generated output. We quantify this decoupling using the Hilbert-Schmidt Independence Criterion (HSIC) applied to hidden-state representations extracted while generating the output sequence. We conduct extensive experiments on four diverse question answering datasets, evaluating both faithfulness and factuality hallucinations across six open-source LMs of varying scales and properties. Our results demonstrate that HIDE outperforms other single-pass methods in almost all settings, achieving an average relative improvement of ~29% in AUC-ROC over the best-performing single-pass strategy across various models and datasets. Additionally, HIDE shows competitive and often superior performance with multi-pass state-of-the-art methods, obtaining an average relative improvement of ~3% in AUC-ROC while consuming ~51% less computation time. Our findings highlight the effectiveness of exploiting internal representation decoupling in LMs for efficient and practical hallucination detection.
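The core quantity here, HSIC, has a simple empirical form: given paired samples of two representations, compute a kernel matrix for each, center both, and take a normalized trace of their product. The sketch below is a generic biased HSIC estimator over toy arrays standing in for hidden states, not the paper's implementation; the RBF kernel, the bandwidth `sigma`, and the choice of which hidden-layer states to compare are all assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel via pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC between row-paired samples X and Y (n x d arrays).

    Larger values indicate stronger statistical dependence; values near zero
    suggest the two representations are (nearly) independent, i.e. decoupled.
    """
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy stand-ins for hidden states: 'dep' is coupled to the context states,
# 'ind' is drawn independently (mimicking a decoupled, hallucinated output).
rng = np.random.default_rng(0)
ctx = rng.normal(size=(64, 16))
dep = ctx + 0.1 * rng.normal(size=(64, 16))
ind = rng.normal(size=(64, 16))
```

Under this setup, `hsic(ctx, dep)` should exceed `hsic(ctx, ind)`, which is the directional signal a decoupling-based detector would threshold on.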
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinations in Language Models efficiently
Reduce computational cost of hallucination detection
Improve accuracy of single-pass detection methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled representations detect hallucinations
Single-pass training-free HSIC method
Internal representation analysis reduces computation