Mitigating Safety Tax via Distribution-Grounded Refinement in Large Reasoning Models

📅 2026-02-02
🤖 AI Summary
This work addresses the significant degradation in general reasoning capabilities—commonly referred to as the “safety tax”—induced by safety alignment in large reasoning models. We propose DGR, a method that mitigates this issue by refining external safety-aligned reasoning data and aligning its distribution with the internal reasoning trajectories of the target model. For the first time, we identify the mismatch between safety data and the model’s internal distribution as a key cause of performance degradation and introduce a distribution-aware data refinement mechanism. Remarkably, experiments show that as few as 10 refined samples suffice to elicit robust safety refusal behavior. On DirectRefusal and R1-ACT benchmarks, DGR improves average reasoning accuracy by 30.2% and 21.2%, respectively, over vanilla supervised fine-tuning while preserving high safety performance.

📝 Abstract
Safety alignment incurs a safety tax that perturbs a large reasoning model's (LRM's) general reasoning ability. Existing datasets used for safety alignment of an LRM are usually constructed by distilling safety reasoning traces and answers from an external LRM or human labelers. However, such reasoning traces and answers exhibit a distributional gap with the target LRM that needs alignment, and we conjecture that this distributional gap is the culprit behind the significant degradation of the target LRM's reasoning ability. Driven by this hypothesis, we propose a safety-alignment dataset construction method, dubbed DGR. DGR transforms and refines an existing out-of-distribution safety reasoning dataset to align it with the target LRM's inner distribution. Experimental results demonstrate that i) DGR effectively mitigates the safety tax while maintaining safety performance across all baselines, achieving **+30.2%** on DirectRefusal and **+21.2%** improvement in average reasoning accuracy on R1-ACT compared to vanilla SFT; ii) the degree of reasoning degradation correlates with the extent of distribution shift, suggesting that bridging this gap is central to preserving capabilities. Furthermore, we find that safety alignment in LRMs may primarily function as a mechanism to activate latent knowledge, as a mere **10** samples are sufficient to activate effective refusal behaviors. These findings not only emphasize the importance of distributional consistency but also provide insight into the activation mechanism of safety in reasoning models.
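The core refinement idea in the abstract, keeping only those external safety traces that sit close to the target model's own distribution, could be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `select_in_distribution` and `toy_nll` are hypothetical, and a real pipeline would score each trace with the target LRM's actual per-token log-probabilities rather than the toy heuristic used here.

```python
from typing import Callable, List

def select_in_distribution(
    candidates: List[str],
    nll_under_target: Callable[[str], float],
    k: int,
) -> List[str]:
    """Keep the k candidate safety traces whose negative
    log-likelihood under the target model is lowest, i.e. the
    traces closest to the model's own distribution."""
    return sorted(candidates, key=nll_under_target)[:k]

# Hypothetical stand-in for a per-token NLL from the target LRM;
# a real pipeline would sum token log-probs from the model itself.
def toy_nll(text: str) -> float:
    refusal_bonus = 0.0 if "can't help" in text else 5.0
    return len(text) / 10.0 + refusal_bonus

pool = [
    "I can't help with that request.",
    "Sure, here is how to ...",
    "I can't help with that; it could cause harm.",
]
refined = select_in_distribution(pool, toy_nll, k=2)
```

Under this sketch, the two refusal-style traces survive the filter while the out-of-distribution compliant trace is dropped, mirroring the paper's claim that a small number of well-matched samples suffices.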
Problem

Research questions and friction points this paper is trying to address.

safety tax
distributional gap
safety alignment
reasoning degradation
large reasoning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distribution-Grounded Refinement
Safety Alignment
Safety Tax
Large Reasoning Models
Distribution Shift
👥 Authors

Yingsha Xie
School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University, China

Tiansheng Huang
Georgia Institute of Technology
Parallel and Distributed Computing · Distributed Machine Learning · LLM Safety

Enneng Yang
School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University, China

Rui Min
Hong Kong University of Science and Technology
Machine Learning · Agent · Trustworthy AI

Wenjie Lu
Didi International Business Group

Xiaochun Cao
Sun Yat-sen University
Computer Vision · Artificial Intelligence · Multimedia · Machine Learning

Naiqiang Tan
Didi International Business Group

Li Shen
Associate Professor, Sun Yat-sen University
Machine Learning · Optimization