THINKSAFE: Self-Generated Safety Alignment for Reasoning Models

📅 2026-01-30
🤖 AI Summary
This work addresses a vulnerability of large reasoning models that arises during reinforcement-learning-based alignment: excessive emphasis on instruction following weakens robustness against harmful prompts, while existing external-teacher distillation methods distort the model’s original reasoning distribution. To overcome these limitations, the authors propose a self-generated safety alignment framework that requires no external teacher. A lightweight refusal-steering mechanism activates the model’s intrinsic safety knowledge, prompting it to autonomously generate safe reasoning trajectories that conform to its native distribution; the model is then fine-tuned on these trajectories in a distribution-preserving manner. The method is, to the authors’ knowledge, the first to leverage a model’s latent safety capabilities for alignment, achieving significant safety improvements on DeepSeek-R1-Distill and Qwen3 while maintaining reasoning performance comparable to GRPO at substantially lower computational cost.

📝 Abstract
Large reasoning models (LRMs) achieve remarkable performance by leveraging reinforcement learning (RL) on reasoning tasks to generate long chain-of-thought (CoT) reasoning. However, this over-optimization often prioritizes compliance, making models vulnerable to harmful prompts. To mitigate this safety degradation, recent approaches rely on external teacher distillation, yet this introduces a distributional discrepancy that degrades native reasoning. We propose ThinkSafe, a self-generated alignment framework that restores safety alignment without external teachers. Our key insight is that while compliance suppresses safety mechanisms, models often retain latent knowledge to identify harm. ThinkSafe unlocks this via lightweight refusal steering, guiding the model to generate in-distribution safety reasoning traces. Fine-tuning on these self-generated responses effectively realigns the model while minimizing distribution shift. Experiments on DeepSeek-R1-Distill and Qwen3 show ThinkSafe significantly improves safety while preserving reasoning proficiency. Notably, it achieves superior safety and comparable reasoning to GRPO, with significantly reduced computational cost. Code, models, and datasets are available at https://github.com/seanie12/ThinkSafe.git.
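The abstract describes "lightweight refusal steering" that nudges the model toward its latent refusal behavior before it generates safety reasoning traces. The paper's exact mechanism is not specified on this page; a common way to implement this kind of activation steering is with a difference-of-means vector between refusal and compliance activations, added to hidden states at generation time. The sketch below illustrates that idea on toy vectors; the helper names and the scaling factor `alpha` are illustrative assumptions, not ThinkSafe's actual procedure.

```python
import numpy as np

def refusal_direction(refusal_acts: np.ndarray, comply_acts: np.ndarray) -> np.ndarray:
    """Hypothetical difference-of-means steering vector: the normalized
    gap between mean refusal and mean compliance activations."""
    d = refusal_acts.mean(axis=0) - comply_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden: np.ndarray, direction: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Shift a hidden state along the refusal direction; in practice this
    would be applied inside a forward hook at a chosen transformer layer."""
    return hidden + alpha * direction

# Toy demonstration with random stand-in activations (dimension 8).
rng = np.random.default_rng(0)
refusal_acts = rng.normal(loc=1.0, size=(32, 8))   # activations on refused prompts
comply_acts = rng.normal(loc=-1.0, size=(32, 8))   # activations on complied prompts
d = refusal_direction(refusal_acts, comply_acts)

h = rng.normal(size=8)          # a fresh hidden state during generation
h_steered = steer(h, d)         # nudged toward the refusal subspace
```

In the framework described above, responses generated under this kind of steering would then serve as in-distribution fine-tuning targets, avoiding the distribution shift that external-teacher distillation introduces.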
Problem

Research questions and friction points this paper is trying to address.

safety alignment
reasoning models
harmful prompts
compliance
distributional discrepancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-generated alignment
safety steering
reasoning models
refusal guidance
distribution shift