Anchoring Refusal Direction: Mitigating Safety Risks in Tuning via Projection Constraint

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Instruction fine-tuning (IFT) improves large language models' (LLMs) task performance but severely degrades their ability to refuse malicious instructions, a decline rooted in the drift of the "refusal direction" (r-direction) in hidden-state space during training. This work is the first to systematically characterize this geometric drift. We propose ProCon, a method that identifies the r-direction via hidden-state analysis and preserves it through a projection-constraint loss. During early IFT, when drift is sharpest, ProCon applies strong regularization via a warm-up schedule and broadens the training data distribution to strengthen the constraint signal. Experiments across multiple LLMs and safety benchmarks show that ProCon substantially improves refusal capability without sacrificing downstream task performance, outperforming existing defense methods. It also offers an interpretable, geometric perspective on LLM safety mechanisms, linking alignment behavior to latent-space geometry.

📝 Abstract
Instruction Fine-Tuning (IFT) has been widely adopted as an effective post-training strategy to enhance various abilities of Large Language Models (LLMs). However, prior studies have shown that IFT can significantly compromise LLMs' safety, particularly their ability to refuse malicious instructions, raising significant concerns. Recent research into the internal mechanisms of LLMs has identified the refusal direction (r-direction) in the hidden states, which plays a pivotal role in governing refusal behavior. Building on this insight, our study reveals that the r-direction tends to drift during training, which we identify as one of the causes of the associated safety risks. To mitigate such drift, our proposed ProCon method introduces a projection-constrained loss term that regularizes the projection magnitude of each training sample's hidden state onto the r-direction. Our initial analysis shows that applying an appropriate constraint can effectively mitigate the refusal direction drift and associated safety risks, but remains limited by overall performance barriers. To overcome this barrier, informed by our observation of early-stage sharp drift and a data-driven perspective, we introduce a warm-up strategy that emphasizes early-stage strong constraints and broaden the data distribution to strengthen constraint signals, leading to an enhanced ProCon method. Experimental results under various datasets, scenarios, and LLMs demonstrate that our method can significantly mitigate safety risks posed by IFT while preserving task performance gains. Even compared with strong baselines, our method consistently delivers superior overall performance. Crucially, our analysis indicates that ProCon can contribute to stabilizing the r-direction during training, while such an interpretability-driven exploration of LLMs' internal mechanisms lays a solid foundation for future safety research.

Problem

Research questions and friction points this paper is trying to address.

Instruction fine-tuning compromises LLM safety against malicious instructions
Refusal direction drift during training causes safety risks
Mitigating safety degradation while preserving task performance gains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Projection-constrained loss regularizes hidden states
Warm-up strategy with early strong constraints
Data distribution broadening strengthens constraint signals
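
The core mechanism can be sketched in a few lines: estimate the r-direction from hidden states, penalize drift of each sample's projection onto it, and weight that penalty heavily early in training. This is a minimal illustration, assuming a difference-of-means extraction of the r-direction, a squared-error penalty on projection drift, and a linear-decay warm-up; the function names and exact formulations are assumptions, not the paper's implementation.

```python
import numpy as np

def refusal_direction(harmful_states, harmless_states):
    # Estimate the r-direction as the normalized difference of mean hidden
    # states on harmful vs. harmless prompts (difference-of-means; the
    # paper's exact extraction procedure may differ).
    diff = harmful_states.mean(axis=0) - harmless_states.mean(axis=0)
    return diff / np.linalg.norm(diff)

def projection_constraint_loss(hidden_states, r_dir, ref_projections):
    # Penalize drift of each training sample's scalar projection onto the
    # r-direction away from its value under the reference (pre-IFT) model.
    proj = hidden_states @ r_dir
    return np.mean((proj - ref_projections) ** 2)

def warmup_weight(step, warmup_steps, w_max, w_min=0.1):
    # Strong constraint during early IFT (when drift is sharpest),
    # decaying toward a weaker floor afterwards.
    if step < warmup_steps:
        return w_max
    return max(w_min, w_max * warmup_steps / step)

# Per-batch training objective (sketch):
#   total_loss = task_loss + warmup_weight(step, ...) * projection_constraint_loss(...)
```

In this reading, the constraint acts as an anchor: the model is free to move hidden states in directions orthogonal to the r-direction (to learn the task), while movement along the r-direction is taxed, most heavily at the start of fine-tuning.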
Authors

Yanrui Du
Harbin Institute of Technology
LLMs, Safety, Medical Domain

Fenglei Fan
City University of Hong Kong, Hong Kong

Sendong Zhao
Harbin Institute of Technology
BioNLP, Large Language Model

Jiawei Cao
SCIR Lab, Harbin Institute of Technology, China

Qika Lin
National University of Singapore | NTU | XJTU | BIT
Knowledge Reasoning, Neurosymbolic AI, Multi-modal, Robustness & Security, AI for Healthcare

Kai He
National University of Singapore, Singapore

Ting Liu
SCIR Lab, Harbin Institute of Technology, China

Bing Qin
Professor at Harbin Institute of Technology
Natural Language Processing, Information Extraction, Sentiment Analysis

Mengling Feng
National University of Singapore, Singapore