Relational Linearity is a Predictor of Hallucinations

📅 2026-01-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the susceptibility of large language models to hallucination when encountering unknown entities and their limited ability to accurately assess the boundaries of their own knowledge. The authors propose “relational linearity,” quantified as Δcos, as a novel metric to measure the abstractness of knowledge representations. To systematically investigate its relationship with hallucination rates, they construct SyntHal, a dataset comprising 6,000 synthetic entities. Experiments across four mainstream models reveal a strong positive correlation between relational linearity and hallucination rate (r ∈ [0.78, 0.82]), offering the first evidence that the structural organization of stored knowledge fundamentally influences a model’s self-awareness. This finding provides a new perspective for understanding and mitigating hallucinations in large language models.

📝 Abstract
Hallucination is a central failure mode in large language models (LLMs). We focus on hallucinated answers to questions like "Which instrument did Glenn Gould play?", but we ask these questions about synthetic entities that are unknown to the model. Surprisingly, we find that medium-size models like Gemma-7B-IT frequently hallucinate, i.e., they have difficulty recognizing that the hallucinated fact is not part of their knowledge. We hypothesize that an important factor in causing these hallucinations is the linearity of the relation: linear relations tend to be stored more abstractly, making it difficult for the LLM to assess its knowledge; the facts of nonlinear relations tend to be stored more directly, making knowledge assessment easier. To investigate this hypothesis, we create SyntHal, a dataset of 6,000 synthetic entities for six relations. In our experiments with four models, we determine, for each relation, the hallucination rate on SyntHal and also measure its linearity, using $\Delta\cos$. We find a strong correlation ($r \in [.78, .82]$) between relational linearity and hallucination rate, providing evidence for our hypothesis that the underlying storage of triples of a relation is a factor in how well a model can self-assess its knowledge. This finding has implications for how to manage hallucination behavior and suggests new research directions for improving the representation of factual knowledge in LLMs.
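The paper's headline result is a Pearson correlation between per-relation linearity ($\Delta\cos$) and per-relation hallucination rate across six relations. The sketch below illustrates only that final correlation step; the relation count (six) matches the abstract, but all numeric values are illustrative placeholders, not the paper's data, and the paper's actual $\Delta\cos$ computation is not reproduced here.

```python
# Minimal sketch: Pearson correlation between per-relation linearity (Δcos)
# and per-relation hallucination rate. All numbers are placeholders.
import math
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One (Δcos, hallucination-rate) pair per relation -- hypothetical values
# for six relations, mimicking the paper's experimental setup.
delta_cos    = [0.12, 0.25, 0.31, 0.44, 0.58, 0.71]
halluc_rate  = [0.10, 0.22, 0.35, 0.41, 0.60, 0.74]

r = pearson_r(delta_cos, halluc_rate)
print(f"r = {r:.2f}")  # a strong positive correlation for these placeholders
```

With the paper's real measurements, this computation would yield the reported $r \in [.78, .82]$ per model; here it simply shows that a monotone relationship between the two lists produces a high $r$.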
Problem

Research questions and friction points this paper is trying to address.

hallucination
relational linearity
large language models
knowledge assessment
synthetic entities
Innovation

Methods, ideas, or system contributions that make the work stand out.

relational linearity
hallucination
knowledge representation
synthetic entities
self-assessment
Yuetian Lu
Center for Information and Language Processing (CIS), LMU Munich, Germany; Ubiquitous Knowledge Processing (UKP) Lab, TU Darmstadt, Germany; Munich Center for Machine Learning (MCML), Germany
Yihong Liu
CIS, LMU Munich
Natural Language Processing, Computational Linguistics, Multilinguality
Hinrich Schütze
University of Munich
natural language processing