🤖 AI Summary
Existing hallucination detection methods for large language models (LLMs) rely on isolated heuristics or shallow signals, resulting in poor generalization. Method: This paper proposes the first unified framework that formulates hallucination detection as a graph learning task. It constructs an attributed graph integrating attention weights and hidden-layer activations, then applies graph neural networks (GNNs) that pass messages along the attention flow, fusing multiple signals for end-to-end hallucination identification. Theoretically, the proposed graph structure subsumes mainstream attention-based heuristics. Contribution/Results: Experiments demonstrate significant improvements over state-of-the-art detectors across multiple benchmarks. Moreover, the method exhibits strong zero-shot cross-dataset transferability, validating the effectiveness and generalization advantage of graph-structured modeling combined with multi-source neural signal fusion.
📝 Abstract
Large Language Models (LLMs) often generate incorrect or unsupported content, known as hallucinations. Existing detection methods rely on heuristics or simple models over isolated computational traces such as activations or attention maps. We unify these signals by representing them as attributed graphs, where tokens are nodes, edges follow attentional flows, and both carry features from attention scores and activations. Our approach, CHARM, casts hallucination detection as a graph learning task and tackles it by applying GNNs over the above attributed graphs. We show that CHARM provably subsumes prior attention-based heuristics and, experimentally, that it consistently outperforms other leading approaches across diverse benchmarks. Our results shed light on the important role played by the graph structure and on the benefits of combining computational traces, while also showing that CHARM exhibits promising zero-shot performance on cross-dataset transfer.
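To make the graph construction concrete, here is a minimal sketch of the general idea: tokens become nodes, edges follow sufficiently strong attention weights, and node features come from activations, over which one message-passing step fuses the signals. All values, the threshold, and the hand-rolled mean-aggregation "layer" are illustrative assumptions, not CHARM's actual architecture.

```python
def build_attention_graph(attn, activations, threshold=0.1):
    """Tokens are nodes; an edge (i, j) exists when token i attends to
    token j above a threshold. Edge attribute = attention score;
    node attributes = activations (kept in the `activations` list)."""
    n = len(activations)
    edges = {}  # (dst, src) -> attention weight
    for i in range(n):
        for j in range(n):
            if attn[i][j] > threshold:
                edges[(i, j)] = attn[i][j]
    return edges

def message_pass(edges, activations):
    """One attention-weighted aggregation step: each node's new feature
    is the weighted mean of its in-neighbors' activations."""
    n, d = len(activations), len(activations[0])
    out = []
    for i in range(n):
        incoming = [(src, w) for (dst, src), w in edges.items() if dst == i]
        total_w = sum(w for _, w in incoming) or 1.0
        out.append([sum(w * activations[src][k] for src, w in incoming) / total_w
                    for k in range(d)])
    return out

# Toy example: 3 tokens, 2-dim activations, causal-shaped attention.
attn = [[1.0, 0.0, 0.0],
        [0.6, 0.4, 0.0],
        [0.2, 0.3, 0.5]]
acts = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

edges = build_attention_graph(attn, acts)
fused = message_pass(edges, acts)
print(len(edges), [round(x, 2) for x in fused[2]])
```

In a real setting, the attention matrix and activations would come from the LLM's forward pass, and a learned GNN (rather than this fixed aggregation) would classify tokens or spans as hallucinated.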