🤖 AI Summary
This work addresses the vulnerability of large reasoning models to diverse jailbreaking attacks, which often stems from insufficient generalization in safety-aware reasoning and leads to ineffective rejection of adversarial harmful prompts. To tackle this challenge, we propose the Risk-Aware Preference Optimization (RAPO) framework, which introduces, for the first time, a risk-aware mechanism into preference optimization. RAPO dynamically identifies and responds to safety risks during chain-of-thought reasoning via adaptive safety alignment, enabling fine-grained control over model responses. Experimental results demonstrate that RAPO significantly enhances the generalization of defense capabilities against complex jailbreaking attacks across multiple large reasoning models, while leaving their general-purpose performance intact.
📝 Abstract
Large Reasoning Models (LRMs) have achieved tremendous success with their chain-of-thought (CoT) reasoning, yet they face safety issues similar to those of basic language models. In particular, while alignment algorithms are designed to guide them to deliberately refuse harmful prompts through safe reasoning, this process often fails to generalize against diverse and complex jailbreak attacks. In this work, we attribute these failures to insufficient generalization of the safe reasoning process, which proves inadequate against complex attack prompts. We provide both theoretical and empirical evidence for the necessity of a more thorough safe reasoning process to defend against advanced attack prompts. Building on this insight, we propose a Risk-Aware Preference Optimization (RAPO) framework that enables LRMs to adaptively identify and address safety risks at an appropriate granularity in their thinking content. Extensive experiments demonstrate that RAPO successfully generalizes multiple LRMs' safe reasoning across diverse attack prompts whilst preserving general utility, contributing a robust alignment technique for LRM safety. Our code is available at https://github.com/weizeming/RAPO.