🤖 AI Summary
Hallucination, where RAG-generated outputs contradict or go beyond the retrieved evidence, remains pervasive. Existing detection methods rely heavily on costly annotated data or on external LLM-based judges, and they suffer from poor interpretability and scalability.
Method: We introduce RAGLens, the first hallucination detector for RAG built on sparse autoencoders (SAEs) from mechanistic interpretability. RAGLens operates entirely on the model's internals: it analyzes layer-wise LLM activations, combining information-theoretic feature selection with additive modeling to isolate hallucination-specific activation patterns.
Contribution/Results: Evaluated across multiple benchmarks, RAGLens significantly outperforms state-of-the-art detectors in accuracy while remaining lightweight and fully interpretable. It reveals systematic cross-layer distributions of hallucination signals and identifies recurrent neuron groups associated with factual inconsistency, enabling both post-hoc intervention and mechanistic analysis of hallucination generation.
📝 Abstract
Retrieval-Augmented Generation (RAG) improves the factuality of large language models (LLMs) by grounding outputs in retrieved evidence, but faithfulness failures, where generations contradict or extend beyond the provided sources, remain a critical challenge. Existing hallucination detection methods for RAG often rely either on large-scale detector training, which requires substantial annotated data, or on querying external LLM judges, which leads to high inference costs. Although some approaches attempt to leverage internal representations of LLMs for hallucination detection, their accuracy remains limited. Motivated by recent advances in mechanistic interpretability, we employ sparse autoencoders (SAEs) to disentangle internal activations, successfully identifying features that are specifically triggered during RAG hallucinations. Building on a systematic pipeline of information-based feature selection and additive feature modeling, we introduce RAGLens, a lightweight hallucination detector that accurately flags unfaithful RAG outputs using LLM internal representations. RAGLens not only achieves superior detection performance compared to existing methods, but also provides interpretable rationales for its decisions, enabling effective post-hoc mitigation of unfaithful RAG. Finally, we justify our design choices and reveal new insights into the distribution of hallucination-related signals within LLMs. The code is available at https://github.com/Teddy-XiongGZ/RAGLens.
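The pipeline the abstract describes (SAE-disentangled activations, information-based feature selection, an additive model over the selected features) can be sketched roughly as follows. This is a minimal illustration, not RAGLens itself: the random-dictionary "SAE" encoder, the synthetic activations and labels, and all dimensions are toy stand-ins for the paper's trained components.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for LLM hidden states: 200 RAG outputs, 64-dim activations.
# Labels: 1 = hallucinated (unfaithful) output, 0 = faithful output.
acts = rng.normal(size=(200, 64))
labels = rng.integers(0, 2, size=200)
# Inject a weak "hallucination signal" into a few activation dimensions.
acts[labels == 1, :4] += 1.5

# Stand-in SAE encoder: ReLU over a fixed random dictionary. A real SAE is
# trained to reconstruct activations under a sparsity penalty; here we only
# mimic the shape of its sparse, overcomplete feature map (64 -> 256).
W = rng.normal(size=(64, 256)) / np.sqrt(64)
features = np.maximum(acts @ W, 0.0)

# Information-based feature selection: keep the top-k SAE features by
# mutual information with the hallucination label.
mi = mutual_info_classif(features, labels, random_state=0)
top_k = np.argsort(mi)[-16:]

# Additive model over the selected features: logistic regression, whose
# per-feature weights double as interpretable rationales for each flag.
clf = LogisticRegression(max_iter=1000).fit(features[:, top_k], labels)
print("train accuracy:", clf.score(features[:, top_k], labels))
```

In this framing, interpretability comes from the final stage: each selected feature contributes an additive term to the detection score, so a flagged output can be traced back to the specific features (and, in a real SAE, the activation patterns) that triggered it.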