🤖 AI Summary
Hallucination in large language model (LLM) generation severely hinders trustworthy deployment. To address this, we propose TOHA, the first hallucination detector that leverages topological discrepancies, quantified via a persistent homology distance, between attention-induced subgraphs of prompts and responses. Specifically, TOHA builds attention subgraphs for each layer and head and measures the topological divergence between them; empirically, high divergence in certain attention heads correlates strongly with hallucinated outputs. The method is naturally compatible with retrieval-augmented generation (RAG) pipelines and requires no fine-tuning or external knowledge bases. Evaluated on multiple question-answering and data-to-text benchmarks, TOHA achieves state-of-the-art performance. We publicly release two high-quality, human-annotated hallucination datasets. Extensive experiments further demonstrate strong generalization across diverse LLMs, unseen datasets, and out-of-domain scenarios.
📝 Abstract
Hallucination, i.e., generating factually incorrect content, remains a critical challenge for large language models (LLMs). We introduce TOHA, a TOpology-based HAllucination detector in the RAG setting, which leverages a topological divergence metric to quantify the structural properties of graphs induced by attention matrices. Examining the topological divergence between prompt and response subgraphs reveals consistent patterns: higher divergence values in specific attention heads correlate with hallucinated outputs, independent of the dataset. Extensive experiments, including evaluation on question answering and data-to-text tasks, show that our approach achieves state-of-the-art or competitive results on several benchmarks, two of which were annotated by us and are being publicly released to facilitate further research. Beyond its strong in-domain performance, TOHA maintains remarkable domain transferability across multiple open-source LLMs. Our findings suggest that analyzing the topological structure of attention matrices can serve as an efficient and robust indicator of factual reliability in LLMs.
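Neither the summary nor the abstract gives implementation details, but the core idea, comparing a topological summary of the prompt-only attention subgraph against the full prompt-plus-response graph, can be sketched roughly. The snippet below is an illustrative assumption, not the authors' implementation: it treats a symmetric attention matrix as a weighted graph (distance = 1 − attention), computes the 0-dimensional persistence barcode via Kruskal's MST, and defines a toy divergence as the difference in total persistence. The function names and this specific divergence are hypothetical.

```python
import numpy as np

def zero_dim_barcode(dist):
    """0-dimensional persistence of a weighted graph via Kruskal's MST.

    Each edge that merges two connected components contributes one finite
    bar; its weight is the bar's death time (all bars are born at 0).
    """
    n = dist.shape[0]
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # edge merges two components
            parent[ri] = rj
            deaths.append(w)
    return np.array(deaths)

def toha_style_divergence(attn, prompt_len):
    """Toy topological divergence (an assumption, not the paper's metric):
    total persistence of the full prompt+response graph minus that of the
    prompt-only subgraph."""
    dist = 1.0 - attn                              # high attention -> short edge
    full_bars = zero_dim_barcode(dist)
    prompt_bars = zero_dim_barcode(dist[:prompt_len, :prompt_len])
    return full_bars.sum() - prompt_bars.sum()

# Toy symmetric "attention" matrix: tokens 0-1 are the prompt, 2-3 the response.
attn = np.array([[1.0, 0.9, 0.1, 0.1],
                 [0.9, 1.0, 0.1, 0.1],
                 [0.1, 0.1, 1.0, 0.8],
                 [0.1, 0.1, 0.8, 1.0]])
print(toha_style_divergence(attn, prompt_len=2))   # ≈ 1.1
```

In practice one would extract per-head attention matrices from the model, symmetrize them, and (per the abstract) inspect specific heads whose divergence separates hallucinated from grounded responses; that selection step is omitted here.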