AI Summary
This work addresses the challenge that large reasoning models (LRMs) may amplify unsafe behaviors during explicit reasoning, a risk inadequately mitigated by existing defenses because they neglect the dynamic nature of the reasoning process. To this end, we propose SafeRemind, a fine-tuning-free, inference-time defense mechanism that dynamically injects safety reminders at critical decision points during decoding, triggered by an entropy-based signal. Our approach is the first to demonstrate the efficacy of safety prompts embedded within intermediate reasoning steps. Extensive evaluation across five LRMs and six benchmarks shows that SafeRemind improves safety by up to 45.5 percentage points while preserving the model's core reasoning capabilities.
Abstract
Large Reasoning Models (LRMs) achieve remarkable success through explicit thinking steps, yet these same steps introduce a novel risk by potentially amplifying unsafe behaviors. Despite this vulnerability, conventional defense mechanisms remain ineffective because they overlook the unique reasoning dynamics of LRMs. In this work, we find that the emergence of safe-reminding phrases within thinking steps plays a pivotal role in ensuring LRM safety. Motivated by this finding, we propose SafeRemind, a decoding-time defense method that dynamically injects safe-reminding phrases into thinking steps. By leveraging entropy triggers to intervene at decision-locking points, SafeRemind redirects potentially harmful trajectories toward safer outcomes without requiring any parameter updates. Extensive evaluations across five LRMs and six benchmarks demonstrate that SafeRemind substantially enhances safety, achieving improvements of up to 45.5 percentage points while preserving core reasoning utility.
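The entropy-triggered intervention described above can be sketched as a simple decoding loop: compute the entropy of the next-token distribution at each step, and when it spikes past a threshold (a proxy for a high-uncertainty decision point), splice a safe-reminding phrase into the generated thinking trace. This is a minimal illustrative sketch, not the paper's implementation; the reminder text, the threshold, and the `step_fn` stand-in for the model are all assumptions.

```python
import numpy as np

# Illustrative reminder and trigger level (assumptions, not the paper's values).
SAFE_REMINDER = " Wait, I should first check whether this request is safe."
ENTROPY_THRESHOLD = 2.0  # nats

def token_entropy(logits: np.ndarray) -> float:
    """Shannon entropy (in nats) of the softmax next-token distribution."""
    z = logits - logits.max()          # stabilize before exponentiating
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def decode_with_reminders(step_fn, steps: int) -> str:
    """Greedy decoding loop with entropy-triggered reminder injection.

    step_fn(text) -> (next_token_str, logits) is a hypothetical stand-in
    for one forward step of an LRM during its thinking phase.
    """
    text = ""
    for _ in range(steps):
        token, logits = step_fn(text)
        if token_entropy(logits) > ENTROPY_THRESHOLD:
            # High uncertainty: redirect the trajectory before committing.
            text += SAFE_REMINDER
        text += token
    return text
```

Because the trigger reads only the model's own next-token distribution, this kind of intervention needs no parameter updates, matching the fine-tuning-free setting described above.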