CARE: Decoding Time Safety Alignment via Rollback and Introspection Intervention

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of balancing safety and generation quality during the decoding phase of large language models (LLMs), this paper proposes CARE: a real-time safety intervention framework. CARE employs a lightweight guard model for on-the-fly safety detection, integrates token-level rollback to correct potentially harmful outputs, and introduces introspective intervention—where critical self-reflection generates interpretable correction signals dynamically incorporated into the context to guide subsequent decoding. This enables fine-grained, low-perturbation, dynamic alignment during decoding. Experiments demonstrate that CARE reduces harmful response rates by an average of 42.3%, preserves generation quality (no degradation in BLEU/ROUGE scores), and maintains decoding efficiency (latency increase <8%). To our knowledge, CARE is the first decoding-phase method to simultaneously achieve safety, fluency, and efficiency.

Technology Category

Application Category

📝 Abstract
As large language models (LLMs) are increasingly deployed in real-world applications, ensuring the safety of their outputs during decoding has become a critical challenge. However, existing decoding-time interventions, such as Contrastive Decoding, often force a severe trade-off between safety and response quality. In this work, we propose CARE, a novel framework for decoding-time safety alignment that integrates three key components: (1) a guard model for real-time safety monitoring, enabling detection of potentially unsafe content; (2) a rollback mechanism with a token buffer to correct unsafe outputs efficiently at an earlier stage without disrupting the user experience; and (3) a novel introspection-based intervention strategy, where the model generates self-reflective critiques of its previous outputs and incorporates these reflections into the context to guide subsequent decoding steps. The framework achieves a superior safety-quality trade-off by using its guard model for precise interventions, its rollback mechanism for timely corrections, and our novel introspection method for effective self-correction. Experimental results demonstrate that our framework achieves a superior balance of safety, quality, and efficiency, attaining a low harmful response rate and minimal disruption to the user experience while maintaining high response quality.
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety of LLM outputs during decoding
Addressing trade-off between safety and response quality
Correcting unsafe outputs efficiently without disrupting user experience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time safety monitoring with guard model
Rollback mechanism for efficient token correction
Introspection intervention for self-reflective critique generation