🤖 AI Summary
LLM hallucinations impede reliable deployment; existing taxonomies largely rely on external behavioral characterizations, neglecting intrinsic mechanistic distinctions. This work proposes the first hallucination taxonomy grounded in two internal dimensions: model "knowledge" and "certainty." We construct a model-specific dataset to distinguish knowledge-deficiency hallucinations from certainty-bias hallucinations. Using steering vector interventions, we validate the knowledge axis; for the certainty axis, we design a novel confidence–correctness joint evaluation metric. Our analysis reveals fundamental differences between the two types: knowledge-deficiency hallucinations stem from factual gaps, whereas certainty-bias hallucinations arise from confident yet incorrect outputs produced despite correct internal knowledge. Critically, prevailing mitigation methods fail disproportionately on the latter. This study establishes a mechanistic foundation for targeted hallucination mitigation, offering both theoretical insights and empirical evidence for developing model-internal, dimension-aware interventions.
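The steering-vector intervention mentioned above can be sketched in a few lines. This is a hedged, toy illustration of one common construction (a difference-of-means direction between activations from prompts the model answers correctly and prompts it lacks knowledge for), not the paper's exact method; the array shapes, layer choice, and scaling factor `alpha` are all illustrative assumptions.

```python
# Toy sketch of a difference-of-means steering vector (illustrative only;
# the paper's actual intervention may differ).
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden size (toy value)

# Hypothetical hidden states collected at one layer:
acts_known = rng.normal(1.0, 0.5, size=(100, d))     # prompts answered correctly
acts_unknown = rng.normal(-1.0, 0.5, size=(100, d))  # prompts lacking knowledge

# Steering vector: normalized difference of class means.
v = acts_known.mean(axis=0) - acts_unknown.mean(axis=0)
v /= np.linalg.norm(v)

def steer(hidden: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Shift a hidden state along the 'knowledge' direction."""
    return hidden + alpha * v
```

At inference time, `steer` would be applied to the residual stream at the chosen layer; because the direction is estimated from the model's own activations, the intervention only makes sense when the relevant parametric knowledge actually exists, which is what makes it usable as a validation tool for the knowledge axis.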
📝 Abstract
Hallucinations in LLMs present a critical barrier to their reliable use. Existing research usually categorizes hallucinations by their external properties rather than by the LLMs' underlying internal properties. This external focus overlooks that hallucinations may require tailored mitigation strategies based on their underlying mechanisms. We propose a framework for categorizing hallucinations along two axes: knowledge and certainty. Since parametric knowledge and certainty may vary across models, our categorization method involves a model-specific dataset construction process that differentiates between those types of hallucinations. Along the knowledge axis, we distinguish between hallucinations caused by a lack of knowledge and those occurring despite the model having knowledge of the correct response. To validate our framework along the knowledge axis, we apply steering mitigation, which relies on the existence of parametric knowledge to manipulate model activations. This addresses the lack of existing methods for validating knowledge categorization by showing a significant difference between the two hallucination types. We further analyze the distinct knowledge and hallucination patterns across models, showing that different hallucinations do occur despite shared parametric knowledge. Turning to the certainty axis, we identify a particularly concerning subset of hallucinations where models hallucinate with certainty despite having the correct knowledge internally. We introduce a new evaluation metric to measure the effectiveness of mitigation methods on this subset, revealing that while some methods perform well on average, they fail disproportionately on these critical cases. Our findings highlight the importance of considering both knowledge and certainty in hallucination analysis and call for targeted mitigation approaches that account for the factors underlying each hallucination.
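The two-axis categorization and the critical subset it isolates can be sketched as a small classifier. This is a hypothetical illustration, not the paper's dataset construction: the field names, the confidence threshold, and the probe that sets `has_knowledge` are all assumptions standing in for the model-specific process described above.

```python
# Illustrative sketch of the two-axis categorization (names and threshold
# are hypothetical, not taken from the paper).
from dataclasses import dataclass

@dataclass
class Sample:
    correct: bool        # did the model answer correctly?
    has_knowledge: bool  # does the model hold the fact parametrically (e.g., via probing)?
    confidence: float    # certainty estimate in [0, 1]

def categorize(s: Sample, conf_threshold: float = 0.8) -> str:
    """Place a sample along the knowledge and certainty axes."""
    if s.correct:
        return "correct"
    if not s.has_knowledge:
        return "knowledge-deficiency"       # hallucination from a factual gap
    if s.confidence >= conf_threshold:
        return "certain-despite-knowledge"  # the critical subset
    return "uncertain-despite-knowledge"

def certain_hallucination_rate(samples, conf_threshold: float = 0.8) -> float:
    """Fraction of hallucinations that are confident despite internal knowledge."""
    halluc = [s for s in samples if not s.correct]
    if not halluc:
        return 0.0
    critical = sum(categorize(s, conf_threshold) == "certain-despite-knowledge"
                   for s in halluc)
    return critical / len(halluc)
```

Evaluating a mitigation method on `certain_hallucination_rate` rather than on overall accuracy is what exposes the failure mode the abstract describes: a method can reduce hallucinations on average while leaving the confident, knowledge-present cases untouched.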