🤖 AI Summary
This work addresses the significant degradation in general reasoning capabilities (commonly referred to as the "safety tax") induced by safety alignment in large reasoning models. We propose DGR, a method that mitigates this issue by refining external safety-aligned reasoning data so that its distribution matches the internal reasoning trajectories of the target model. We identify, for the first time, the mismatch between external safety data and the model's internal distribution as a key cause of this degradation, and introduce a distribution-aware data refinement mechanism to bridge it. Remarkably, experiments show that as few as 10 refined samples suffice to elicit robust refusal behavior. On the DirectRefusal and R1-ACT benchmarks, DGR improves average reasoning accuracy by 30.2% and 21.2%, respectively, over vanilla supervised fine-tuning while preserving high safety performance.
📝 Abstract
Safety alignment incurs a safety tax that degrades a large reasoning model's (LRM) general reasoning ability. Existing safety alignment datasets for LRMs are usually constructed by distilling safety reasoning traces and answers from an external LRM or human labeler. However, such reasoning traces and answers exhibit a distributional gap with the target LRM being aligned, and we conjecture that this gap is the culprit behind the significant degradation of the target LRM's reasoning ability. Driven by this hypothesis, we propose a safety alignment dataset construction method, dubbed DGR. DGR transforms and refines an existing out-of-distribution safety reasoning dataset to align it with the target LRM's internal distribution. Experimental results demonstrate that i) DGR effectively mitigates the safety tax while maintaining safety performance across all baselines, achieving **+30.2%** (DirectRefusal) and **+21.2%** (R1-ACT) improvements in average reasoning accuracy over vanilla SFT; ii) the degree of reasoning degradation correlates with the extent of distribution shift, suggesting that bridging this gap is central to preserving capabilities. Furthermore, we find that safety alignment in LRMs may primarily function as a mechanism for activating latent knowledge, as a mere **10** samples suffice to activate effective refusal behaviors. These findings not only emphasize the importance of distributional consistency but also provide insights into the activation mechanism of safety in reasoning models.
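The abstract does not spell out how the refinement step works. As a purely illustrative, hypothetical sketch (not the paper's actual DGR procedure), one plausible distribution-aware filtering step is to keep only the external safety traces that the target model itself assigns the highest likelihood, on the assumption that those sit closest to its internal reasoning distribution. The function names and the toy scorer below are inventions for illustration; a real scorer would compute, e.g., mean per-token log-probability under the target LRM.

```python
# Hypothetical sketch of distribution-aware filtering (NOT the paper's DGR
# implementation): rank external safety traces by a target-model likelihood
# score and keep the most in-distribution fraction.
from typing import Callable, List


def refine_by_target_likelihood(
    candidates: List[str],
    score_fn: Callable[[str], float],
    keep_ratio: float = 0.5,
) -> List[str]:
    """Rank candidate safety traces by score_fn (e.g., mean per-token
    log-probability under the target LRM) and keep the top keep_ratio."""
    if not 0.0 < keep_ratio <= 1.0:
        raise ValueError("keep_ratio must be in (0, 1]")
    ranked = sorted(candidates, key=score_fn, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:n_keep]


# Toy stand-in scorer: a real one would query the target LRM for log-probs.
# Here we pretend shorter refusals are "more in-distribution".
def toy_score(trace: str) -> float:
    return -len(trace)


traces = [
    "I cannot help with that request because it could cause harm.",
    "Refusing: step 1 ... step 2 ... step 3 ... (long external trace)",
    "I must decline; this request is unsafe.",
]
kept = refine_by_target_likelihood(traces, toy_score, keep_ratio=0.34)
print(kept)
```

In a real pipeline the kept traces would then be rewritten or further refined before supervised fine-tuning; the sketch only shows the selection idea.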