🤖 AI Summary
To address hallucinations in Retrieval-Augmented Generation (RAG) models caused by entanglement between parametric knowledge and externally retrieved knowledge, this paper conducts a mechanistic interpretability analysis of the residual stream, revealing for the first time that such hallucinations stem from feed-forward networks (FFNs) relying excessively on internal parametric knowledge while copying heads fail to effectively integrate external content. Building on this insight, we propose a knowledge-utilization decoupling detection paradigm and the Adaptive Activation Rescaling and Filtering (AARF) hallucination mitigation mechanism. Our approach models the functional decoupling between FFNs and copying heads via targeted residual stream interventions and introduces an interpretable, lightweight hallucination detector. Evaluated across multiple benchmarks, the detector achieves an average 12.7% improvement in hallucination detection accuracy, while the AARF module reduces hallucination rates by up to 38.5%. Crucially, both components require no fine-tuning or additional training.
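The decoupled detection idea above can be illustrated with a toy sketch. This is not the paper's actual scoring function; the function names, the cosine-similarity proxy for parametric reliance, and the attention-mass proxy for external grounding are all assumptions made for illustration.

```python
import numpy as np

def external_context_score(attn_weights, context_positions):
    """Mean attention mass the copying heads place on retrieved-context
    tokens (rows of attn_weights are per-head distributions over positions)."""
    return float(attn_weights[:, context_positions].sum(axis=1).mean())

def parametric_knowledge_score(ffn_update, resid_pre):
    """Cosine similarity between the FFN's residual-stream update and the
    incoming residual stream, used as a rough proxy for how strongly
    parametric knowledge is being written in (illustrative assumption)."""
    num = float(np.dot(ffn_update, resid_pre))
    den = float(np.linalg.norm(ffn_update) * np.linalg.norm(resid_pre)) + 1e-8
    return num / den

def hallucination_score(attn_weights, context_positions, ffn_update, resid_pre):
    """Higher score = more parametric reliance and less external grounding."""
    return (parametric_knowledge_score(ffn_update, resid_pre)
            - external_context_score(attn_weights, context_positions))

# Toy example: 4 heads, 6 positions, positions 0-2 hold the retrieved context.
rng = np.random.default_rng(0)
attn = rng.random((4, 6))
attn /= attn.sum(axis=1, keepdims=True)   # normalize rows to attention distributions
ffn_update = rng.standard_normal(8)
resid_pre = rng.standard_normal(8)
score = hallucination_score(attn, np.array([0, 1, 2]), ffn_update, resid_pre)
```

A token-level detector of this shape is lightweight by construction: it reads activations that the forward pass already computes, so no fine-tuning or extra training is needed.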
📝 Abstract
Retrieval-Augmented Generation (RAG) models are designed to incorporate external knowledge, reducing hallucinations caused by insufficient parametric (internal) knowledge. However, even when the retrieved content is accurate and relevant, RAG models can still hallucinate by generating outputs that conflict with the retrieved information. Detecting such hallucinations requires disentangling how Large Language Models (LLMs) utilize external and parametric knowledge. Current detection methods often focus on only one of these mechanisms, or fail to decouple their intertwined effects, making accurate detection difficult. In this paper, we investigate the internal mechanisms behind hallucinations in RAG scenarios. We find that hallucinations occur when the Knowledge FFNs in LLMs overemphasize parametric knowledge in the residual stream, while Copying Heads fail to effectively retain or integrate external knowledge from the retrieved content. Based on these findings, we propose ReDeEP, a novel method that detects hallucinations by decoupling the LLM's utilization of external context and parametric knowledge. Our experiments show that ReDeEP significantly improves RAG hallucination detection accuracy. Additionally, we introduce AARF, which mitigates hallucinations by modulating the contributions of Knowledge FFNs and Copying Heads.
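An AARF-style intervention can be sketched as a simple rescaling of residual-stream contributions. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `aarf_rescale` and the scale factors `alpha`/`beta` are hypothetical, standing in for whatever adaptive factors the actual mechanism computes.

```python
import numpy as np

def aarf_rescale(ffn_update, head_updates, flagged, alpha=0.5, beta=1.5):
    """Rescale residual-stream contributions for a token flagged as hallucinated.

    ffn_update:   (d,) Knowledge FFN contribution to the residual stream
    head_updates: (num_heads, d) per-head (copying head) contributions
    flagged:      bool from a hallucination detector

    Downweights the FFN's parametric write (alpha < 1) and upweights the
    copying heads' external-context write (beta > 1); fixed factors here
    are an illustrative assumption.
    """
    if not flagged:
        return ffn_update, head_updates
    return alpha * ffn_update, beta * head_updates

# Toy example: one token, hidden size 8, 4 copying heads.
rng = np.random.default_rng(1)
ffn = rng.standard_normal(8)
heads = rng.standard_normal((4, 8))
ffn_s, heads_s = aarf_rescale(ffn, heads, flagged=True)
```

Because the rescaling only reweights activations already present in the forward pass, it requires no gradient updates, which matches the paper's claim that the mitigation works without fine-tuning.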