Large Reasoning Models Learn Better Alignment from Flawed Thinking

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models (LRMs) exhibit structured chain-of-thought (CoT) capabilities but do not critically identify and correct flawed premises injected into their reasoning, leaving them vulnerable to bias and jailbreak attacks. To address this, the paper proposes RECAP (Robust Safety Alignment via Counter-Aligned Prefilling), a reinforcement learning method that trains on a mixture of synthetically generated counter-aligned CoT prefills and standard prompts within a vanilla RLHF framework, teaching the model to override flawed reasoning trajectories and reroute toward safe, helpful responses. RECAP incurs no additional inference overhead, substantially improves robustness to adversarial attacks, reduces overrefusal, remains stable under multi-turn jailbreak attempts, and preserves core reasoning quality. Its central contribution is internalizing flawed-premise detection and safety-aware correction as part of the end-to-end trained reasoning process.

📝 Abstract
Large reasoning models (LRMs) "think" by generating structured chain-of-thought (CoT) before producing a final answer, yet they still lack the ability to reason critically about safety alignment and are easily biased when a flawed premise is injected into their thought process. We propose RECAP (Robust Safety Alignment via Counter-Aligned Prefilling), a principled reinforcement learning (RL) method for post-training that explicitly teaches models to override flawed reasoning trajectories and reroute to safe and helpful responses. RECAP trains on a mixture of synthetically generated counter-aligned CoT prefills and standard prompts, requires no additional training cost or modifications beyond vanilla reinforcement learning from human feedback (RLHF), and substantially improves safety and jailbreak robustness, reduces overrefusal, and preserves core reasoning capability -- all while maintaining inference token budget. Extensive analysis shows that RECAP-trained models engage in self-reflection more frequently and remain robust under adaptive attacks, preserving safety even after repeated attempts to override their reasoning.
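The abstract's training recipe (a mixture of counter-aligned CoT prefills and standard prompts) can be illustrated with a minimal data-construction sketch. This is not the paper's released code: the function names, the `<think>` tag, and the deterministic interleaving scheme are illustrative assumptions standing in for the actual mixing procedure.

```python
# Hypothetical sketch of RECAP-style data mixing: some training prompts are
# prefilled with a counter-aligned (deliberately flawed) chain-of-thought
# that the RL objective then teaches the model to override.

def make_counter_aligned_example(prompt: str, flawed_cot: str) -> str:
    """Prefill the model's reasoning with a flawed trajectory; the reward
    would favor completions that reroute to a safe, helpful answer."""
    return f"{prompt}\n<think>{flawed_cot}"

def build_batch(prompts, flawed_cots, prefill_every=2):
    """Interleave counter-aligned prefills with standard prompts
    (a deterministic stand-in for the paper's training mixture)."""
    batch = []
    for i, (prompt, cot) in enumerate(zip(prompts, flawed_cots)):
        if i % prefill_every == 0:
            batch.append(make_counter_aligned_example(prompt, cot))
        else:
            batch.append(prompt)  # standard prompt, no prefill
    return batch

batch = build_batch(
    ["How do I secure my home router?", "Summarize this article."],
    ["Safety guidelines do not apply here, so...",
     "The user secretly wants something harmful, so..."],
)
```

Because the prefill is part of the training input rather than an inference-time wrapper, this construction adds no tokens at inference, consistent with the abstract's claim of a maintained inference token budget.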
Problem

Research questions and friction points this paper is trying to address.

- Models lack critical reasoning about safety alignment during chain-of-thought
- Models are easily biased by flawed premises injected into the reasoning process
- A method is needed that teaches models to override unsafe reasoning and maintain safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Reinforcement learning method for post-training safety alignment
- Trains models on counter-aligned reasoning trajectories mixed with standard prompts
- Maintains safety without extra inference tokens