Neural Message-Passing on Attention Graphs for Hallucination Detection

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hallucination detection methods for large language models (LLMs) rely on isolated heuristics or shallow signals over single computational traces, resulting in poor generalization. Method: This paper proposes the first unified framework that formulates hallucination detection as a graph learning task. It constructs an attributed graph that integrates attention weights and hidden-layer activations, then applies graph neural networks (GNNs) to pass messages along the attention flow, fusing multiple neural signals for end-to-end hallucination identification. Theoretically, the proposed graph structure subsumes mainstream attention-based heuristics. Contribution/Results: Experiments demonstrate significant improvements over state-of-the-art detectors across multiple benchmarks. Moreover, the method exhibits strong zero-shot cross-dataset transferability, validating the effectiveness and generalization advantage of graph-structured modeling combined with multi-source neural signal fusion.
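The summary describes building an attributed graph from a model's internal traces. A minimal sketch of such a construction, assuming one layer's attention matrix and hidden states are available (the top-k edge selection and the function name are illustrative, not taken from the paper):

```python
import numpy as np

def build_attention_graph(attn, hidden, top_k=3):
    """Build an attributed token graph from one transformer layer's traces.

    attn:   (T, T) attention matrix; row i holds the weights token i
            places on earlier tokens.
    hidden: (T, d) hidden-state activations, one row per token.

    Hypothetical construction illustrating the paper's idea: keep each
    token's top-k strongest attention edges, use the attention weight as
    the edge feature, and use the activation vector as the node feature.
    """
    T = attn.shape[0]
    edges = []  # (src, dst, weight) triples following the attention flow
    for i in range(T):
        order = np.argsort(attn[i])[::-1][:top_k]  # strongest sources for token i
        for j in order:
            if attn[i, j] > 0:
                edges.append((int(j), i, float(attn[i, j])))
    node_feats = hidden.copy()  # activations become node attributes
    return node_feats, edges
```

A GNN would then operate on `node_feats` and `edges`, so that both attention structure and activation content inform the prediction.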

📝 Abstract
Large Language Models (LLMs) often generate incorrect or unsupported content, known as hallucinations. Existing detection methods rely on heuristics or simple models over isolated computational traces such as activations or attention maps. We unify these signals by representing them as attributed graphs, where tokens are nodes, edges follow attentional flows, and both carry features from attention scores and activations. Our approach, CHARM, casts hallucination detection as a graph learning task and tackles it by applying GNNs over the above attributed graphs. We show that CHARM provably subsumes prior attention-based heuristics and, experimentally, that it consistently outperforms other leading approaches across diverse benchmarks. Our results shed light on the relevant role played by the graph structure and on the benefits of combining computational traces, while showing that CHARM exhibits promising zero-shot performance on cross-dataset transfer.
Problem

Research questions and friction points this paper is trying to address.

Detecting hallucinations in Large Language Models' outputs
Unifying attention maps and activations as attributed graphs
Applying graph neural networks for improved hallucination detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Representing attention traces as attributed graphs
Applying graph neural networks for hallucination detection
Combining attention scores with activation features
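The listed innovations combine attention scores (as edge weights) with activation features (as node states) inside a GNN. One round of message passing over the graph sketched above might look as follows; the weighted-mean aggregation, the ReLU, and the parameter names `W_self`/`W_msg` are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

def message_passing_step(node_feats, edges, W_self, W_msg):
    """One round of attention-weighted message passing.

    node_feats: (T, d) node features (token activations).
    edges:      list of (src, dst, weight) triples; weights are attention scores.
    W_self, W_msg: (d, d') linear maps for the self and message paths
    (illustrative names, not from the paper).
    """
    T = node_feats.shape[0]
    agg = np.zeros_like(node_feats)
    norm = np.zeros(T)
    for src, dst, w in edges:
        agg[dst] += w * node_feats[src]  # messages scaled by attention weight
        norm[dst] += w
    norm[norm == 0] = 1.0                # isolated nodes receive a zero message
    agg = agg / norm[:, None]            # weighted mean over incoming edges
    # mix self-representation with aggregated neighbour information
    return np.maximum(node_feats @ W_self + agg @ W_msg, 0.0)
```

Stacking a few such layers and attaching a per-token classifier head would yield an end-to-end hallucination detector in the spirit the summary describes.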