🤖 AI Summary
This work addresses the hallucination problem in large language models (LLMs), which often generate plausible yet factually incorrect responses in tasks such as question answering. The study reveals, for the first time, that a prevalent uniform attention pattern in the shallow layers of LLMs is a key contributor to such hallucinations. To mitigate this issue, the authors propose the Attention Replacement Technique (ART), a training-free method that substitutes uniform attention in early layers with localized attention, thereby steering the model to focus on relevant contextual information. ART requires neither fine-tuning nor additional data and demonstrates consistent reductions in hallucination rates across multiple mainstream LLM architectures, confirming both its effectiveness and architectural generality.
📝 Abstract
Hallucination in large language models (LLMs) continues to be a significant issue, particularly in tasks like question answering, where models often generate plausible yet incorrect or irrelevant information. Although various methods have been proposed to mitigate hallucinations, the relationship between attention patterns and hallucinations has not been fully explored. In this paper, we analyze the distribution of attention scores across each layer and attention head of LLMs, revealing a common and intriguing phenomenon: shallow layers of LLMs primarily rely on uniform attention patterns, where the model distributes its attention evenly across the entire sequence. This uniform attention pattern can lead to hallucinations, as the model fails to focus on the most relevant information. To mitigate this issue, we propose a training-free method called the Attention Replacement Technique (ART), which replaces these uniform attention patterns in the shallow layers with local attention patterns. This change directs the model to focus more on the relevant contexts, thus reducing hallucinations. Extensive experiments show that ART significantly reduces hallucinations across multiple LLM architectures, demonstrating its effectiveness and generalizability without requiring fine-tuning or additional training data.
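The abstract does not give ART's exact formulation, but the core idea — flag attention heads whose distribution is near-uniform, then re-normalize their scores under a local (banded) mask — can be sketched with NumPy. All function names, the entropy-based uniformity test, and the thresholds below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def is_uniform(attn, threshold=0.9):
    """Heuristic uniformity test (an assumption, not the paper's criterion):
    a head is 'uniform' if its mean row entropy is close to log(seq_len),
    the entropy of a perfectly uniform distribution.
    attn: (seq, seq) row-stochastic attention matrix."""
    seq = attn.shape[-1]
    entropy = -(attn * np.log(attn + 1e-12)).sum(axis=-1).mean()
    return entropy / np.log(seq) > threshold

def local_attention(scores, window=2):
    """Replace a head's pattern by re-normalizing its raw scores
    under a symmetric banded mask of half-width `window`."""
    seq = scores.shape[-1]
    i = np.arange(seq)[:, None]
    j = np.arange(seq)[None, :]
    mask = np.abs(i - j) <= window          # keep only nearby positions
    masked = np.where(mask, scores, -np.inf) # -inf -> zero weight after softmax
    return softmax(masked, axis=-1)

# Toy example: tiny logits produce a near-uniform head, which gets replaced.
seq = 8
rng = np.random.default_rng(0)
scores = rng.normal(scale=0.01, size=(seq, seq))
attn = softmax(scores, axis=-1)

if is_uniform(attn):
    attn = local_attention(scores, window=2)
```

In a real decoder-only LLM the mask would additionally be causal and the replacement applied only to shallow-layer heads; the sketch omits both for brevity.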