Hallucination Detection in LLMs via Topological Divergence on Attention Graphs

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hallucination in large language model (LLM) generation severely hinders trustworthy deployment. To address this, we propose TOHA, the first hallucination detector that leverages topological discrepancies—quantified via persistent homology distance—between attention-induced subgraphs of prompts and responses. Specifically, TOHA constructs attention subgraphs per layer and head, then measures their topological divergence; we empirically find that high divergence in certain attention heads strongly correlates with hallucinatory outputs. The method is inherently compatible with retrieval-augmented generation (RAG) pipelines, requiring no fine-tuning or external knowledge bases. Evaluated on multiple question-answering and data-to-text benchmarks, TOHA achieves state-of-the-art or competitive performance. We publicly release two high-quality, human-annotated hallucination datasets. Extensive experiments further demonstrate strong generalization across diverse LLMs, unseen datasets, and out-of-domain scenarios.

📝 Abstract
Hallucination, i.e., generating factually incorrect content, remains a critical challenge for large language models (LLMs). We introduce TOHA, a TOpology-based HAllucination detector in the RAG setting, which leverages a topological divergence metric to quantify the structural properties of graphs induced by attention matrices. Examining the topological divergence between prompt and response subgraphs reveals consistent patterns: higher divergence values in specific attention heads correlate with hallucinated outputs, independent of the dataset. Extensive experiments, including evaluation on question answering and data-to-text tasks, show that our approach achieves state-of-the-art or competitive results on several benchmarks, two of which were annotated by us and are being publicly released to facilitate further research. Beyond its strong in-domain performance, TOHA maintains remarkable domain transferability across multiple open-source LLMs. Our findings suggest that analyzing the topological structure of attention matrices can serve as an efficient and robust indicator of factual reliability in LLMs.
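The divergence the abstract describes can be illustrated with a minimal sketch (not the authors' released implementation). It relies on the standard fact that the total edge weight of a graph's minimum spanning tree equals the sum of death times in its 0-dimensional persistence barcode under the edge-weight filtration; the attention-to-distance mapping and the prompt-vs-full comparison below are simplifying assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def zero_dim_barcode_sum(dist):
    """Total lifetime of the 0-dim persistence barcode of a weighted
    complete graph, which equals the weight of its minimum spanning tree.
    Assumes strictly positive off-diagonal distances."""
    return minimum_spanning_tree(dist).sum()

def topological_divergence(attn, prompt_len):
    """Hypothetical divergence for one attention head.

    attn: (T, T) attention matrix over prompt + response tokens.
    prompt_len: number of prompt tokens (the first rows/columns).
    """
    # Symmetrize and turn attention strength into a distance in [0, 1]:
    # strongly attending token pairs become close.
    sym = (attn + attn.T) / 2.0
    dist = 1.0 - sym
    np.fill_diagonal(dist, 0.0)
    full = zero_dim_barcode_sum(dist)                           # prompt + response
    prompt = zero_dim_barcode_sum(dist[:prompt_len, :prompt_len])  # prompt only
    # Extra topological "cost" contributed by the response tokens.
    return full - prompt
```

A high value means the response tokens attach loosely to the prompt's attention graph, which is the pattern the paper associates with hallucination.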
Problem

Research questions and friction points this paper is trying to address.

Detecting hallucination in LLMs via attention graph topology
Quantifying structural divergence in prompt-response attention graphs
Evaluating factual reliability using topological analysis of attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Topological divergence on attention graphs
Quantify structural properties of attention matrices
Domain transferability across multiple LLMs
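The head-level idea in the bullets above, that a few attention heads are especially informative, can be sketched as follows. The `divergence_fn` parameter and the max-over-heads aggregation are placeholders for illustration, not the paper's exact selection procedure.

```python
import numpy as np

def per_head_divergences(attn_stack, prompt_len, divergence_fn):
    """Apply a divergence function to every (layer, head) attention matrix.

    attn_stack: array of shape (layers, heads, T, T).
    Returns an array of shape (layers, heads) of divergence scores.
    """
    n_layers, n_heads, _, _ = attn_stack.shape
    return np.array([[divergence_fn(attn_stack[l, h], prompt_len)
                      for h in range(n_heads)]
                     for l in range(n_layers)])

def hallucination_score(attn_stack, prompt_len, divergence_fn):
    """Hypothetical aggregate: take the maximum divergence over all heads,
    on the premise that the most divergent head is the most informative."""
    return per_head_divergences(attn_stack, prompt_len, divergence_fn).max()
```

In practice one would calibrate a threshold on held-out annotated data and flag responses whose score exceeds it.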
Authors

Alexandra Bazarova — Skolkovo Institute of Science and Technology
Aleksandr Yugay — Skolkovo Institute of Science and Technology
Andrey Shulga — Skolkovo Institute of Science and Technology
Alina Ermilova — Skolkovo Institute of Science and Technology
Andrei Volodichev — Skolkovo Institute of Science and Technology
Konstantin Polev — Sber AI Lab
Julia Belikova — Sber AI Lab
Rauf Parchiev — Sber AI Lab
Dmitry Simakov — Sber AI Lab (data science)
Maxim Savchenko — Sber AI Lab
Andrey Savchenko — Sber AI Lab; HSE University - Nizhny Novgorod (computer vision, pattern recognition, machine learning, speech processing, image processing)
Serguei Barannikov — unknown affiliation (machine learning, topology, geometry, mathematical physics)
Alexey Zaytsev — Associate Professor at BIMSA (deep learning, machine learning, statistics)