What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of interpretability in large language model (LLM) hallucination generation and the high latency introduced by reliance on external knowledge (e.g., RAG), this work proposes the first unsupervised hallucination attribution and detection framework grounded in internal states across the three stages of the forward process: understanding, query, and generation. Leveraging hierarchical activation extraction, stage-wise state modeling, and statistical feature analysis (including attention entropy), the authors identify a strong correlation between abnormally elevated attention entropy during the query stage and factual collapse. The method operates entirely without external retrieval, enabling lightweight, real-time hallucination detection: it achieves an average accuracy of 92.3% across multiple benchmarks while reducing latency by 87%. This work establishes an interpretability paradigm centered on LLMs' "cognitive processes," providing both theoretical foundations and technical pathways for intrinsic, mechanism-driven hallucination mitigation.
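The attention-entropy signal described above can be illustrated with a small sketch. This is not the paper's implementation: the function names, the z-score outlier rule, and the threshold are hypothetical, and it assumes you have already extracted per-head attention weight matrices (e.g., from a transformer's forward pass) as a NumPy array.

```python
import numpy as np

def attention_entropy(attn_weights: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Shannon entropy (in nats) of each attention distribution.

    attn_weights: shape (num_heads, seq_len, seq_len); each row along the
    last axis is a softmax distribution over key positions.
    Returns entropies of shape (num_heads, seq_len).
    """
    p = np.clip(attn_weights, eps, 1.0)  # avoid log(0)
    return -(attn_weights * np.log(p)).sum(axis=-1)

def flag_high_entropy(entropies: np.ndarray, z_thresh: float = 1.5) -> np.ndarray:
    """Flag token positions whose mean-over-heads entropy is a z-score
    outlier, as a stand-in for 'abnormally elevated' attention entropy.
    The z-score rule and threshold are illustrative assumptions."""
    mean_ent = entropies.mean(axis=0)                     # (seq_len,)
    z = (mean_ent - mean_ent.mean()) / (mean_ent.std() + 1e-12)
    return z > z_thresh

# Example: a peaked attention row has near-zero entropy, a uniform row
# has the maximum entropy log(seq_len).
peaked = np.zeros((1, 4, 4)); peaked[..., 0] = 1.0
uniform = np.full((1, 4, 4), 0.25)
print(attention_entropy(peaked).max(), attention_entropy(uniform).max())
```

In this framing, a hallucination detector would compute such entropies on states extracted from the query stage and treat high-entropy outliers as a warning sign, with no external retrieval involved.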

📝 Abstract
Large language model (LLM) systems suffer from models' unstable ability to generate valid and factual content, resulting in hallucinations. Current hallucination detection methods rely heavily on information sources outside the model, such as RAG, to assist detection, which introduces substantial additional latency. Recently, the internal states of LLM inference have been widely used in research, for example in prompt injection detection. Given the interpretability of LLM internal states and the fact that they require no external information sources, we introduce these states into LLM hallucination detection. In this paper, we systematically analyze the features that different internal states reveal during the forward pass and comprehensively evaluate their ability to detect hallucinations. Specifically, we divide the forward process of a large language model into three stages (understanding, query, and generation) and extract internal states from each stage. By analyzing these states, we provide a deep understanding of why hallucinated content is generated and what happens inside the model's internal states. We then apply these internal states to hallucination detection and conduct comprehensive experiments to discuss their advantages and limitations.
Problem

Research questions and friction points this paper is trying to address.

Detects hallucinations in LLMs
Uses model internal states
Enhances interpretability and reduces latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes internal state analysis
Divides forward process stages
Integrates states for hallucination detection