🤖 AI Summary
Instruction fine-tuning (IFT) enhances the task performance of large language models (LLMs) but severely degrades their ability to refuse malicious instructions, a decline rooted in the drift of the "refusal direction" (r-direction) in hidden-state space during training. This work is the first to systematically characterize this geometric drift. We propose ProCon, a method that explicitly identifies the r-direction via hidden-state analysis and enforces its preservation through a projection-constraint loss. Because drift is sharpest early in training, ProCon applies strong regularization during early IFT via a warm-up schedule and broadens the training data distribution to strengthen the constraint signal, anchoring the r-direction. Experiments across multiple LLMs and safety benchmarks demonstrate that ProCon significantly improves refusal capability without compromising downstream task performance, outperforming existing defense methods. Moreover, it provides an interpretable, geometric perspective on LLM safety mechanisms, linking alignment behavior to latent-space geometry.
📝 Abstract
Instruction Fine-Tuning (IFT) has been widely adopted as an effective post-training strategy for enhancing various abilities of Large Language Models (LLMs). However, prior studies have shown that IFT can significantly compromise LLMs' safety, particularly their ability to refuse malicious instructions, raising serious concerns. Recent research into the internal mechanisms of LLMs has identified the refusal direction (r-direction) in the hidden states, which plays a pivotal role in governing refusal behavior. Building on this insight, our study reveals that the r-direction tends to drift during training, which we identify as one cause of the associated safety risks. To mitigate this drift, our proposed ProCon method introduces a projection-constrained loss term that regularizes the projection magnitude of each training sample's hidden state onto the r-direction. Our initial analysis shows that an appropriate constraint can effectively mitigate the drift and the associated safety risks, but remains limited by an overall performance barrier. To overcome this barrier, informed by our observation of sharp early-stage drift and a data-driven perspective, we introduce a warm-up strategy that applies strong constraints in the early stage and broaden the training data distribution to strengthen the constraint signal, yielding an enhanced ProCon method. Experimental results across various datasets, scenarios, and LLMs demonstrate that our method significantly mitigates the safety risks posed by IFT while preserving task performance gains. Even compared with strong baselines, our method consistently delivers superior overall performance. Crucially, our analysis indicates that ProCon helps stabilize the r-direction during training, and this interpretability-driven exploration of LLMs' internal mechanisms lays a solid foundation for future safety research.
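The core ideas above (an r-direction extracted from hidden states, a loss that penalizes drift of each sample's projection onto it, and a warm-up weight that is strongest early in training) can be sketched in a minimal numpy toy example. Everything here is an illustrative assumption, not the paper's implementation: the difference-of-means extraction, the linear warm-up decay, and all function names are hypothetical stand-ins.

```python
import numpy as np

def refusal_direction(h_harmful, h_harmless):
    # Estimate the r-direction as the normalized difference of the mean
    # hidden states over harmful vs. harmless prompts (difference-of-means
    # is one common extraction; the paper's exact procedure may differ).
    r = h_harmful.mean(axis=0) - h_harmless.mean(axis=0)
    return r / np.linalg.norm(r)

def warmup_weight(step, warmup_steps, lam_max):
    # Hypothetical schedule: the constraint is strongest at step 0
    # (when drift is sharpest) and decays linearly over the window.
    return lam_max * max(0.0, 1.0 - step / warmup_steps)

def projection_constraint_loss(h, h_ref, r):
    # Mean squared drift of each sample's scalar projection onto r,
    # measured against reference (pre-fine-tuning) hidden states.
    return float(np.mean((h @ r - h_ref @ r) ** 2))

# Toy demo with random "hidden states".
rng = np.random.default_rng(0)
d = 8
h_harmful = rng.normal(size=(16, d)) + 2.0   # shifted cluster
h_harmless = rng.normal(size=(16, d))
r = refusal_direction(h_harmful, h_harmless)

h_ref = rng.normal(size=(4, d))
h_drifted = h_ref + 0.5 * r                  # simulate drift along r
loss = warmup_weight(0, 100, 1.0) * projection_constraint_loss(h_drifted, h_ref, r)
```

In a real fine-tuning loop this penalty would be added to the task loss, so gradients discourage the model from moving hidden states along the r-direction while still fitting the instruction data.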