Hallucination, Monofacts, and Miscalibration: An Empirical Investigation

📅 2025-02-11
🤖 AI Summary
This work investigates the relationship among hallucination in large language models (LLMs), the monofact rate (the proportion of training observations whose underlying fact appears exactly once), and model miscalibration. Using n-gram models, in-context learning (ICL) with LLMs, Good-Turing missing-mass estimation, and controlled sample-weighting, the authors empirically validate the theoretical result of Kalai and Vempala (2024) that the hallucination rate is lower bounded by the monofact rate minus the model's miscalibration. Crucially, when the monofact rate is held constant, introducing controlled miscalibration through upweighting of training samples reduces hallucination, illustrating a fundamental calibration-hallucination trade-off. Building on this, the work suggests frequency-aware selective duplication of training data as a principled mechanism for mitigating hallucination, challenging prevailing aggressive deduplication practices in LLM pretraining.
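The monofact rate described above coincides with the classical Good-Turing missing-mass estimate: the fraction of training observations whose fact occurs exactly once. A minimal sketch (the function name and toy corpus are illustrative, not from the paper):

```python
from collections import Counter

def monofact_rate(facts):
    """Good-Turing missing-mass estimate: the fraction of observations
    whose fact appears exactly once ("monofacts") in the sample."""
    counts = Counter(facts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(facts)

# Toy corpus of fact identifiers: "a" and "d" each occur once.
corpus = ["a", "b", "b", "c", "c", "c", "d"]
print(monofact_rate(corpus))  # 2 singletons out of 7 samples, 2/7
```

Intuitively, a high monofact rate means much of the true fact distribution was never repeated in training, which is exactly the mass a calibrated generative model is forced to spread over unseen (and thus potentially fabricated) outputs.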

📝 Abstract
Recent theoretical work by [Kalai and Vempala 2024] proves that a particular notion of hallucination rate in LLMs must be lower bounded by the training data monofact rate (related to the classical Good-Turing missing mass estimator) minus model miscalibration. Through systematic experiments with n-gram models and in-context learning with LLMs, we empirically investigate and validate this theory by examining how different underlying data distributions affect the monofact rate and a model's tendency to hallucinate. We then vary model miscalibration through controlled upweighting of training samples while holding monofact rates constant, allowing us to isolate miscalibration's reduction effect on hallucination. These findings suggest that both the distribution of fact frequencies in training data and the calibration-hallucination trade-off are inherent to probabilistic language generation. Our results also suggest that current practices of aggressive deduplication in training data may need to be reconsidered, as selective duplication could serve as a principled mechanism for reducing hallucination.
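The bound from Kalai and Vempala (2024) that the abstract refers to can be sketched numerically; the function below is a hypothetical illustration of the inequality, not code from the paper. Holding the monofact rate fixed, larger miscalibration lowers the floor on hallucination, which is the trade-off the experiments isolate:

```python
def hallucination_lower_bound(monofact_rate, miscalibration):
    """Lower bound on hallucination rate in the Kalai-Vempala sense:
    hallucination >= monofact rate - miscalibration (clipped at 0)."""
    return max(0.0, monofact_rate - miscalibration)

# Same monofact rate, increasing miscalibration: the bound relaxes.
print(round(hallucination_lower_bound(0.30, 0.05), 2))  # 0.25
print(round(hallucination_lower_bound(0.30, 0.20), 2))  # 0.10
```

This is why a well-calibrated model trained on data with many single-occurrence facts cannot avoid hallucinating, and why deliberately induced miscalibration (e.g. via sample upweighting) can trade calibration for fewer hallucinations.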
Problem

Research questions and friction points this paper is trying to address.

Empirically test the theoretical lower bound on LLM hallucination rates
Examine how the underlying data distribution shapes the monofact rate
Isolate miscalibration's effect on hallucination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Empirically validates the monofact-minus-miscalibration lower bound on hallucination
Isolates miscalibration's effect via controlled upweighting of training samples
Proposes selective duplication as a principled mechanism for reducing hallucination