🤖 AI Summary
Large language models (LLMs) often fail when asked to repeat a single token many times, drifting into unrelated output, a puzzling behavioral fragility. This work links that failure to "attention sinks": an emergent pattern in which initial tokens receive disproportionately high self-attention scores, and which long repetitions disrupt. Using interpretability techniques, including circuit localization, attention diagnostics, and internal activation tracing, the authors identify the responsible components and design a targeted, performance-preserving fine-tuning patch. The intervention resolves the repetition failure without degrading performance on standard benchmarks, and the analysis extends to other non-repeating sequences that trigger the same circuit disruption. The study offers a mechanistic, circuit-level explanation of a practical LLM vulnerability and a reliable, interpretable fix, advancing both mechanistic understanding and robustness engineering of LLMs.
📝 Abstract
Large Language Models (LLMs), despite their impressive capabilities, often fail to accurately repeat a single word when prompted to do so, and instead output unrelated text. This unexplained failure mode represents a vulnerability, allowing even end users to steer models away from their intended behavior. We aim to explain the causes of this phenomenon and link it to "attention sinks", an emergent LLM behavior crucial for fluency, in which the initial token receives disproportionately high attention scores. Our investigation identifies the neural circuit responsible for attention sinks and shows how long repetitions disrupt this circuit. We extend this finding to other non-repeating sequences that exhibit similar circuit disruptions. To address the failure, we propose a targeted patch that resolves it without degrading the model's overall performance. This study provides a mechanistic explanation for an LLM vulnerability, demonstrates how interpretability can diagnose and address such issues, and offers insights that pave the way for more secure and reliable models.
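The "attention sink" pattern described above can be made concrete with a minimal NumPy sketch. This is a synthetic illustration, not the paper's method: the attention scores are random, and the `+4.0` logit boost toward the first token is a hypothetical stand-in for the sink behavior a trained model exhibits. The sketch shows the diagnostic one would run on real attention maps, measuring how much attention mass each query places on the initial token.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sink_mass(attn):
    """Average attention mass that queries place on the first token.

    attn: [seq, seq] row-stochastic causal attention matrix.
    """
    return float(attn[:, 0].mean())

rng = np.random.default_rng(0)
seq = 16
scores = rng.normal(size=(seq, seq))
scores[:, 0] += 4.0                                 # hypothetical sink: boost logits toward token 0
mask = np.triu(np.ones((seq, seq), dtype=bool), 1)  # causal mask: no attention to future tokens
scores[mask] = -np.inf
attn = softmax(scores, axis=-1)

print(f"attention mass on first token: {sink_mass(attn):.2f}")
```

On a real model, the same statistic can be computed from the per-layer attention tensors (e.g. the maps returned when requesting attentions from a Hugging Face causal LM); a healthy sink shows most heads concentrating mass on position 0, and the paper's diagnosis is that long repetitions break this concentration.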