Detecting Contextual Hallucinations in LLMs with Frequency-Aware Attention

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses hallucination in large language models during context-aware generation by proposing a lightweight detection method grounded in frequency-domain analysis. It brings a signal-processing perspective to attention mechanisms, modeling attention distributions as discrete signals and revealing a correlation between high-frequency attention energy and hallucinated tokens, thereby overcoming the limitations of conventional coarse-grained attention analyses. The resulting detector outperforms approaches based on verification, internal representations, or standard attention metrics across diverse models and tasks on the RAGTruth and HalluRAG benchmarks.
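To make the frequency-domain idea concrete, the sketch below treats one generated token's attention distribution over the context as a discrete 1-D signal and measures the fraction of spectral energy above a cutoff frequency. This is an illustrative reading of the summary, not the paper's implementation: the function name high_freq_energy, the FFT-based band split, and the cutoff_ratio parameter are all assumptions.

```python
import numpy as np

def high_freq_energy(attn, cutoff_ratio=0.5):
    """Fraction of spectral energy above a cutoff frequency (assumed feature).

    attn: 1-D array, attention distribution over context tokens for a
          single generated token (one head, or head-averaged).
    cutoff_ratio: fraction of the spectrum treated as "high frequency"
                  (an illustrative choice, not from the paper).
    """
    spectrum = np.fft.rfft(attn - attn.mean())  # drop DC so energy reflects variation
    power = np.abs(spectrum) ** 2
    cutoff = int(len(power) * cutoff_ratio)
    total = power.sum()
    return power[cutoff:].sum() / total if total > 0 else 0.0

# Toy usage: attn_matrix has shape [T, N]; each row is the attention
# distribution over N context tokens for one of T generated tokens.
rng = np.random.default_rng(0)
attn_matrix = rng.dirichlet(np.ones(128), size=16)  # synthetic stand-in for real attention
features = np.array([high_freq_energy(row) for row in attn_matrix])
print(features.shape)  # (16,): one high-frequency energy score per generated token
```

Under this reading, a sharply grounded token yields a smooth, concentrated attention signal (low high-frequency energy), while fragmented attention over many scattered context positions yields rapid local changes and a larger high-frequency share.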

📝 Abstract
Hallucination detection is critical for ensuring the reliability of large language models (LLMs) in context-based generation. Prior work has explored intrinsic signals available during generation, among which attention offers a direct view of grounding behavior. However, existing approaches typically rely on coarse summaries that fail to capture fine-grained instabilities in attention. Inspired by signal processing, we introduce a frequency-aware perspective on attention by analyzing its variation during generation. We model attention distributions as discrete signals and extract high-frequency components that reflect rapid local changes in attention. Our analysis reveals that hallucinated tokens are associated with high-frequency attention energy, reflecting fragmented and unstable grounding behavior. Based on this insight, we develop a lightweight hallucination detector using high-frequency attention features. Experiments on the RAGTruth and HalluRAG benchmarks show that our approach achieves performance gains over verification-based, internal-representation-based, and attention-based methods across models and tasks.
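The "lightweight hallucination detector" in the abstract could plausibly be as simple as a linear classifier over per-token high-frequency attention features. The sketch below shows that shape of pipeline on synthetic data; the choice of LogisticRegression, the feature count, and the labels are illustrative assumptions, not the paper's actual detector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy setup: X holds per-token high-frequency attention features
# (e.g., band energies from several heads/layers, as in the sketch above),
# and y marks which tokens were hallucinated. Both are synthetic here.
rng = np.random.default_rng(1)
n_tokens, n_features = 500, 8
X = rng.normal(size=(n_tokens, n_features))
y = (X[:, 0] + 0.5 * rng.normal(size=n_tokens) > 0.8).astype(int)  # synthetic labels

# A linear probe keeps the detector lightweight: no extra generation passes,
# just a classifier over features already available during decoding.
detector = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])
probs = detector.predict_proba(X[400:])[:, 1]  # hallucination probability per token
print(f"flagged {(probs > 0.5).sum()} of {len(probs)} held-out tokens")
```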
Problem

Research questions and friction points this paper is trying to address.

hallucination detection
large language models
contextual hallucinations
attention mechanism
RAG
Innovation

Methods, ideas, or system contributions that make the work stand out.

frequency-aware attention
hallucination detection
attention signal analysis
large language models
high-frequency components