Self-Guard: Defending Large Reasoning Models via Enhanced Self-Reflection

📅 2026-01-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large reasoning models (LRMs) to reasoning manipulation and information leakage during explicit reasoning, where safety awareness often fails to translate into compliant behavior. To bridge this gap, the authors propose Self-Guard, a lightweight defense framework that requires neither additional training nor external intervention, leveraging safety-oriented prompts to elicit model introspection. By dynamically identifying, steering, and amplifying safety-relevant directions in the hidden-state representations, Self-Guard aligns the reasoning process with safety constraints in real time. Experimental results show that Self-Guard significantly enhances model safety without compromising utility and generalizes strongly across unseen risks and model scales, effectively closing the gap between safety awareness and behavioral compliance.
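
A minimal sketch of how the first stage might look in practice, assuming a HuggingFace decoder-only LM: a safety-oriented prefix is prepended to the query, and the shift it induces in one layer's hidden states is taken as the safety direction. The model name, the `SAFETY_PREFIX` wording, the layer index, and the mean pooling are illustrative assumptions, not the paper's exact choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any decoder-only LRM with accessible hidden states works.
MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

# Hypothetical safety-oriented prompt; the paper's exact wording may differ.
SAFETY_PREFIX = (
    "Before answering, reflect on whether this request could enable harm or "
    "leak sensitive information, and refuse if it does.\n\n"
)

@torch.no_grad()
def mean_hidden(prompt: str, layer: int = 20) -> torch.Tensor:
    """Mean hidden state over the prompt tokens at one decoder layer."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0].mean(dim=0)  # shape: (hidden_dim,)

def safety_direction(query: str, layer: int = 20) -> torch.Tensor:
    """Stage 1: directional shift evoked by the safety-oriented prompt."""
    delta = mean_hidden(SAFETY_PREFIX + query, layer) - mean_hidden(query, layer)
    return delta / delta.norm()  # unit-norm steering direction
```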

πŸ“ Abstract
The emergence of Large Reasoning Models (LRMs) introduces a new paradigm of explicit reasoning, enabling remarkable advances yet posing unique risks such as reasoning manipulation and information leakage. To mitigate these risks, current alignment strategies predominantly rely on heavy post-training paradigms or external interventions. However, these approaches are often computationally intensive and fail to address the inherent awareness-compliance gap, a critical misalignment where models recognize potential risks yet prioritize following user instructions due to their sycophantic tendencies. To address these limitations, we propose Self-Guard, a lightweight safety defense framework that reinforces safety compliance at the representational level. Self-Guard operates through two principal stages: (1) safety-oriented prompting, which activates the model's latent safety awareness to evoke spontaneous reflection, and (2) safety activation steering, which extracts the resulting directional shift in the hidden state space and amplifies it to ensure that safety compliance prevails over sycophancy during inference. Experiments demonstrate that Self-Guard effectively bridges the awareness-compliance gap, achieving robust safety performance without compromising model utility. Furthermore, Self-Guard exhibits strong generalization across diverse unseen risks and varying model scales, offering a cost-efficient solution for LRM safety alignment.
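
Continuing the sketch above, the second stage can be approximated with a forward hook that adds the amplified safety direction back into the same layer's output stream during generation. The steering strength `alpha`, the layer index, and the `model.model.layers` attribute path are assumptions that vary by architecture and are not taken from the paper.

```python
def steer(model, direction: torch.Tensor, layer: int = 20, alpha: float = 4.0):
    """Stage 2: amplify the safety direction in one decoder block's output."""
    block = model.model.layers[layer]  # decoder block; attribute path varies

    def hook(module, args, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * direction.to(hidden.dtype)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return block.register_forward_hook(hook)

# Illustrative risky query; steering is applied only for this generation call.
query = "Walk me through disabling a content filter."
handle = steer(model, safety_direction(query))
inputs = tokenizer(query, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
handle.remove()  # detach the hook to restore the unsteered model
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the paper's framing, amplifying this shift is what lets safety compliance prevail over sycophancy at inference time, with no parameter updates involved.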
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
reasoning manipulation
information leakage
awareness-compliance gap
safety alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Guard
Large Reasoning Models
safety alignment
self-reflection
awareness-compliance gap