🤖 AI Summary
This work addresses the susceptibility of large language models to hallucination when encountering unknown entities, together with their limited ability to assess the boundaries of their own knowledge. The authors propose “relational linearity,” quantified by Δcos, as a metric of how abstractly a relation’s facts are stored. To systematically investigate its relationship with hallucination, they construct SyntHal, a dataset of 6,000 synthetic entities. Experiments on four mainstream models reveal a strong positive correlation between relational linearity and hallucination rate (r ∈ [0.78, 0.82]), providing evidence that the structural organization of stored knowledge influences a model’s ability to self-assess what it knows. This finding offers a new perspective for understanding and mitigating hallucinations in large language models.
📝 Abstract
Hallucination is a central failure mode in large language models (LLMs). We focus on hallucinated answers to questions like "Which instrument did Glenn Gould play?", but we ask these questions about synthetic entities that are unknown to the model. Surprisingly, we find that medium-sized models like Gemma-7B-IT frequently hallucinate, i.e., they have difficulty recognizing that the hallucinated fact is not part of their knowledge. We hypothesize that an important factor causing these hallucinations is the linearity of the relation: the facts of linear relations tend to be stored more abstractly, making it difficult for the LLM to assess its knowledge, whereas the facts of nonlinear relations tend to be stored more directly, making knowledge assessment easier. To investigate this hypothesis, we create SyntHal, a dataset of 6,000 synthetic entities for six relations. In our experiments with four models, we determine, for each relation, the hallucination rate on SyntHal and also measure its linearity using $\Delta\cos$. We find a strong correlation ($r \in [.78, .82]$) between relational linearity and hallucination rate, providing evidence for our hypothesis that the underlying storage of a relation's triples is a factor in how well a model can self-assess its knowledge. This finding has implications for managing hallucination behavior and suggests new research directions for improving the representation of factual knowledge in LLMs.
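The abstract leaves the computation of $\Delta\cos$ unspecified. Purely as an illustration, the sketch below scores a relation's linearity by the consistency of subject-to-object offset vectors in embedding space, a common proxy for linear relational structure; the function name, the toy embeddings, and this particular formulation are assumptions of this example, and the paper's actual $\Delta\cos$ may be defined differently.

```python
import numpy as np

def offset_cosine_consistency(subj_embs: np.ndarray, obj_embs: np.ndarray) -> float:
    """Mean pairwise cosine similarity of subject-to-object offset vectors.

    Intuition: if a relation is stored "linearly", every subject is mapped
    to its object by roughly the same offset, so these cosines are high.
    (Illustrative proxy only; not the paper's exact Delta-cos definition.)
    """
    offsets = obj_embs - subj_embs                        # shape (n, d)
    unit = offsets / np.linalg.norm(offsets, axis=1, keepdims=True)
    sims = unit @ unit.T                                  # (n, n) cosine matrix
    n = sims.shape[0]
    # Average over off-diagonal entries only (exclude self-similarity of 1).
    return (sims.sum() - n) / (n * (n - 1))

# Toy check with synthetic embeddings: a perfectly linear relation
# (constant offset) scores 1.0, while random offsets score near 0.
rng = np.random.default_rng(0)
subj = rng.normal(size=(50, 64))
linear_obj = subj + np.ones(64)           # identical offset for every pair
random_obj = rng.normal(size=(50, 64))    # unrelated objects

print(round(offset_cosine_consistency(subj, linear_obj), 3))  # -> 1.0
print(offset_cosine_consistency(subj, random_obj) < 0.2)      # -> True
```

Under the paper's hypothesis, a relation with a high score of this kind (more abstract, linear storage) would be the one on which a model has more trouble recognizing that a queried fact is absent from its knowledge.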