Hallucination is Inevitable: An Innate Limitation of Large Language Models

📅 2024-01-22
🏛️ arXiv.org
📈 Citations: 142
Influential citations: 3
🤖 AI Summary
This paper investigates whether hallucinations in large language models (LLMs) can be eliminated in principle. Drawing on computability theory, it formally defines hallucination as an inconsistency between a model's outputs and a computable ground-truth function, and proves that, within the computability framework, LLMs used as general problem solvers inevitably exhibit irreducible hallucinations because no LLM can learn the entire class of computable functions. The methodology combines computability theory, statistical learning theory, formal semantic modeling, analysis under provable time-complexity constraints, and empirical validation. Key contributions are: (1) establishing hallucination as a fundamental, intrinsic limitation of LLMs; (2) characterizing hallucination-prone task classes and delineating their theoretical boundaries; and (3) clarifying the essential limitations and scope of applicability of existing mitigation techniques.
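As a rough, hedged illustration of the framework described above (the symbols below are assumed notation, not taken verbatim from the paper), hallucination can be stated in LaTeX as a disagreement between a computable LLM and a computable ground-truth function:

% Sketch of the formal notion of hallucination (assumed notation).
% \mathcal{S} : the set of all finite input strings
% f : \mathcal{S} \to \mathcal{S}  -- a computable ground-truth function
% h : \mathcal{S} \to \mathcal{S}  -- an LLM, modelled as a computable function
% h hallucinates with respect to f iff it disagrees with f on some input:
\[
  \exists\, s \in \mathcal{S} \;:\; h(s) \neq f(s).
\]
% Equivalently, h is hallucination-free on f iff h(s) = f(s) for every s \in \mathcal{S}.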

📝 Abstract
Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question of whether hallucination can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world in which hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore inevitably hallucinate if used as general problem solvers. Since the formal world is a part of the real world, which is much more complicated, hallucinations are also inevitable for real-world LLMs. Furthermore, for real-world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications for the safe deployment of LLMs.
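The abstract's impossibility claim rests on a diagonalization over computable functions. The following LaTeX sketch gives the flavor of that style of argument under assumed notation; it is not the paper's exact statement or proof:

% Diagonalization sketch (assumed notation).
% Let h_1, h_2, h_3, \dots be any computable enumeration of candidate LLMs,
% each a computable function \mathcal{S} \to \mathcal{S}, and let s_1, s_2, \dots
% enumerate the inputs. Define a ground-truth function f by diagonalizing:
\[
  f(s_i) \;=\; \text{any } y \text{ with } y \neq h_i(s_i), \qquad i = 1, 2, 3, \dots
\]
% f is computable whenever the enumerations and each h_i are, yet every h_i
% disagrees with f on input s_i. Hence every LLM in the family hallucinates
% with respect to f, so no enumerable family of LLMs can be hallucination-free
% on all computable ground truths.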
Problem

Research questions and friction points this paper is trying to address.

Whether hallucination in LLMs can be completely eliminated
Whether LLMs can learn all computable functions
Which tasks are hallucination-prone for real-world LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizes hallucination in LLMs
Employs learning theory results
Characterizes hallucination-prone tasks and validates them empirically
Ziwei Xu
National University of Singapore
Machine Learning · Knowledge Representation · AI Safety
Sanjay Jain
School of Computing, National University of Singapore
Mohan S. Kankanhalli
School of Computing, National University of Singapore