🤖 AI Summary
Even after safety alignment, large language models remain vulnerable to continuation-triggered jailbreak attacks, yet the underlying mechanisms are poorly understood. This work provides the first mechanistic explanation at the attention level, showing that such jailbreaks arise from an inherent conflict between the model's intrinsic drive to continue text and the safety constraints imposed by alignment training. Through causal intervention, activation scaling, and head-level interpretability analyses, we identify safety-critical attention heads and systematically compare their behavior across model architectures. These findings not only elucidate the root cause of jailbreak vulnerabilities but also offer theoretical insight and practical guidance for hardening models against such attacks.
📄 Abstract
With the rapid advancement of large language models (LLMs), their safety has become a critical concern. Despite significant efforts in safety alignment, current LLMs remain vulnerable to jailbreak attacks. However, the root causes of these vulnerabilities remain poorly understood, making a rigorous investigation of jailbreak mechanisms a priority for both the academic and industrial communities. In this work, we focus on a continuation-triggered jailbreak phenomenon, whereby simply relocating a continuation-triggering instruction suffix can substantially increase jailbreak success rates. To uncover the intrinsic mechanisms of this phenomenon, we conduct a comprehensive mechanistic interpretability analysis at the level of attention heads. Through causal interventions and activation scaling, we show that this jailbreak behavior primarily arises from an inherent competition between the model's intrinsic continuation drive and the safety defenses acquired through alignment training. Furthermore, we perform a detailed behavioral analysis of the identified safety-critical attention heads, revealing notable differences in the functions and behaviors of safety heads across model architectures. These findings provide a novel mechanistic perspective for understanding and interpreting jailbreak behaviors in LLMs, offering both theoretical insights and practical implications for improving model safety.
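To make the intervention style the abstract describes more concrete, here is a minimal sketch of head-level activation scaling: a toy multi-head attention layer in which one head's output is multiplied by a factor before the output projection, so its causal contribution to the layer output can be measured. This is an illustrative assumption, not the paper's code; all shapes, names, and the `head_scale` parameter are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, head_scale=None):
    """Toy MHA. head_scale maps head index -> scaling factor
    (1.0 leaves the head untouched, 0.0 ablates it)."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ Wq).reshape(seq, n_heads, d_head)
    k = (x @ Wk).reshape(seq, n_heads, d_head)
    v = (x @ Wv).reshape(seq, n_heads, d_head)
    heads = []
    for h in range(n_heads):
        attn = softmax(q[:, h] @ k[:, h].T / np.sqrt(d_head))
        out = attn @ v[:, h]
        if head_scale and h in head_scale:
            out = head_scale[h] * out  # the intervention: scale this head
        heads.append(out)
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(0)
d, n_heads, seq = 8, 2, 4
x = rng.normal(size=(seq, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))

base = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads)
ablated = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, head_scale={0: 0.0})
# size of head 0's causal effect on the layer output
effect = np.linalg.norm(base - ablated)
```

In an actual LLM this scaling would be applied via forward hooks on each attention layer, and the downstream quantity of interest would be the model's refusal behavior rather than a raw output norm; the toy above only shows the mechanics of isolating and rescaling a single head.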