🤖 AI Summary
This work addresses the safety and robustness of large language models (LLMs) under adversarial attacks and in-distribution errors by proposing Reinforcement Learning with Backtracking Feedback (RLBF), a reinforcement learning framework grounded in backtracking signals. The approach employs a critic module that dynamically detects safety violations in the model's live outputs and triggers a "backtrack by x tokens" mechanism, endowing the model with self-recovery capabilities for safe generation. Additionally, an enhanced supervised fine-tuning data-generation strategy, termed BSAFE+, is introduced to efficiently construct high-quality training data. Experimental results demonstrate that the proposed method substantially reduces the success rates of sophisticated adversarial attacks, including GCG and insertion-based attacks, across diverse model scales and benchmarks, while effectively preserving the model's core language capabilities.
📝 Abstract
Addressing the critical need for robust safety in Large Language Models (LLMs), particularly against adversarial attacks and in-distribution errors, we introduce Reinforcement Learning with Backtracking Feedback (RLBF). This framework advances upon prior methods, such as BSAFE, by primarily leveraging a Reinforcement Learning (RL) stage where models learn to dynamically correct their own generation errors. Through RL with critic feedback on the model's live outputs, LLMs are trained to identify and recover from their actual, emergent safety violations by emitting an efficient "backtrack by x tokens" signal, then continuing generation autoregressively. This RL process is crucial for instilling resilience against sophisticated adversarial strategies, including middle filling, Greedy Coordinate Gradient (GCG) attacks, and decoding parameter manipulations. To further support the acquisition of this backtracking capability, we also propose an enhanced Supervised Fine-Tuning (SFT) data generation strategy (BSAFE+). This method improves upon previous data creation techniques by injecting violations into coherent, originally safe text, providing more effective initial training for the backtracking mechanism. Comprehensive empirical evaluations demonstrate that RLBF significantly reduces attack success rates across diverse benchmarks and model scales, achieving superior safety outcomes while critically preserving foundational model utility.
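The "backtrack by x tokens" recovery described above can be sketched as a decoding loop that, upon seeing a backtrack signal, discards the last x generated tokens and resumes generation autoregressively. This is a minimal illustrative sketch only: the signal representation (a `("BACKTRACK", x)` tuple) and the `next_token_fn` interface are assumptions, not the paper's actual vocabulary or API.

```python
def decode_with_backtracking(next_token_fn, prompt, max_steps=50):
    """Generate tokens while honoring "backtrack by x tokens" signals.

    next_token_fn(tokens) returns either a normal token (str),
    a ("BACKTRACK", x) tuple requesting deletion of the last x
    generated tokens, or None to end generation. (Hypothetical
    interface for illustration.)
    """
    tokens = list(prompt)
    for _ in range(max_steps):
        out = next_token_fn(tokens)
        if out is None:  # end-of-sequence
            break
        if isinstance(out, tuple) and out[0] == "BACKTRACK":
            # Erase the last x generated tokens, but never the prompt.
            x = min(out[1], len(tokens) - len(prompt))
            if x > 0:
                del tokens[len(tokens) - x:]
        else:
            tokens.append(out)  # ordinary autoregressive step
    return tokens


def scripted(sequence):
    """Replay a fixed token sequence, standing in for a real model."""
    it = iter(sequence)
    return lambda tokens: next(it, None)


# A violation ("unsafe") is emitted, retracted by one token, and
# generation continues with a safe alternative.
result = decode_with_backtracking(
    scripted(["The", "unsafe", ("BACKTRACK", 1), "safe", "answer"]),
    prompt=["<prompt>"],
)
print(result)
```

In an RL training setup along these lines, the critic would score the live continuation and reward trajectories that emit the backtrack signal promptly after a violation, rather than completing the unsafe span.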