🤖 AI Summary
This study addresses the challenge of hallucinations in small-scale language models during multi-turn or agent-based dialogues, where fluent yet factually incorrect responses undermine reliability. The work introduces a novel geometric perspective by modeling hallucinations through the lens of embedding space structure, revealing that truthful responses exhibit significantly tighter clustering than hallucinated ones. Building on this insight, the authors propose a label-efficient approach that combines a cluster compactness metric with a propagation-based classification algorithm. Requiring only 30–50 annotated samples, the method achieves high-precision hallucination detection across large response sets, attaining F1 scores exceeding 90%. This approach departs from conventional paradigms centered on knowledge verification or single-turn evaluation, offering a scalable and data-efficient pathway for hallucination detection.
📝 Abstract
Hallucinations -- fluent but factually incorrect responses -- pose a major challenge to the reliability of language models, especially in multi-step or agentic settings. This work investigates hallucinations in small-sized LLMs from a geometric perspective, starting from the hypothesis that, when a model generates multiple responses to the same prompt, the genuine ones cluster more tightly in the embedding space. We confirm this hypothesis and, leveraging the geometric insight, show that a consistent level of separability between genuine and hallucinated responses can be achieved. Building on this result, we introduce a label-efficient propagation method that classifies large collections of responses from just 30-50 annotations, achieving F1 scores above 90%. By framing hallucinations geometrically in the embedding space, our findings complement traditional knowledge-centric and single-response evaluation paradigms, paving the way for further research.
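The abstract describes two ingredients: a cluster-compactness measure over response embeddings, and a graph-based label propagation that spreads a handful of annotations across many responses. The paper does not specify the exact formulas, so the sketch below is only an illustrative assumption: compactness as mean pairwise cosine distance, and a simple clamped label-propagation loop over a cosine-similarity kernel (all function names and parameters are hypothetical, not the authors' implementation).

```python
import numpy as np

def compactness(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine distance; lower values = tighter cluster.
    (Illustrative metric -- the paper's exact compactness measure may differ.)"""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(X)
    mean_off_diag_sim = (sims.sum() - n) / (n * (n - 1))
    return 1.0 - mean_off_diag_sim

def propagate_labels(embeddings: np.ndarray, seed_labels: np.ndarray,
                     sigma: float = 0.5, n_iter: int = 100) -> np.ndarray:
    """Toy label propagation: seed_labels holds 0/1 for the few annotated
    responses and -1 for unlabeled ones. Labels spread along a
    cosine-similarity graph, with annotated nodes clamped each step."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    W = np.exp((X @ X.T - 1.0) / sigma)   # similarity kernel in [exp(-2/sigma), 1]
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    n = len(X)
    F = np.zeros((n, 2))
    seeded = seed_labels >= 0
    F[seeded, seed_labels[seeded]] = 1.0
    for _ in range(n_iter):
        F = P @ F
        F[seeded] = 0.0                   # clamp the annotated seeds
        F[seeded, seed_labels[seeded]] = 1.0
    return F.argmax(axis=1)               # 0 = truthful, 1 = hallucinated
```

On synthetic data with two well-separated clusters, a few seeds per class are enough for the propagation to label the rest, which mirrors the 30-50-annotation regime the abstract claims.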