🤖 AI Summary
This paper addresses the pervasive safety-capability trade-off in large language model (LLM) fine-tuning, where improving downstream task performance often degrades safety alignment, by analyzing Reinforcement Learning with Verifiable Rewards (RLVR), an emerging training paradigm whose safety properties were previously unexplored. Theoretically, we derive upper bounds on safety drift under KL-divergence constraints and prove conditions under which safety degradation is eliminated. Empirically, we validate RLVR across five adversarial safety benchmarks, demonstrating substantial gains in reasoning capability while maintaining or even improving safety alignment. Compared to supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), RLVR combines verifiability, safety preservation, and task performance, countering the inverse relationship between safety and capability seen in conventional fine-tuning methods.
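As a concrete illustration of the recipe the summary describes, here is a minimal sketch of an RLVR objective: an objectively checkable reward combined with a KL penalty toward the aligned reference policy. The function names, the exact-match check, and the penalty coefficient `beta` are illustrative assumptions, not the paper's implementation.

```python
def verifiable_reward(model_answer: str, ground_truth: str) -> float:
    """Objectively checkable reward: 1.0 iff the answer matches the
    ground truth exactly. No learned reward model is involved, which
    is what makes the signal 'verifiable'."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0


def kl_penalized_objective(reward: float,
                           logprob_policy: float,
                           logprob_ref: float,
                           beta: float = 0.1) -> float:
    """Per-sample RLVR objective: task reward minus a KL penalty that
    anchors the fine-tuned policy to the safety-aligned reference.
    The KL term is estimated by the log-probability ratio, whose
    expectation under the policy equals the true KL divergence."""
    kl_estimate = logprob_policy - logprob_ref
    return reward - beta * kl_estimate


# A correct answer keeps the objective high despite a small drift penalty:
# reward 1.0, KL estimate 0.1, objective 1.0 - 0.1 * 0.1 = 0.99.
print(kl_penalized_objective(verifiable_reward("42", "42"),
                             logprob_policy=-1.2, logprob_ref=-1.3))
```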
📝 Abstract
Fine-tuning large language models (LLMs) for downstream tasks typically exhibits a fundamental safety-capability trade-off, where improving task performance degrades safety alignment even on benign datasets. This degradation persists across standard approaches, including supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). While reinforcement learning with verifiable rewards (RLVR) has emerged as a promising alternative that optimizes models on objectively measurable tasks, its safety implications remain unexplored. We present the first comprehensive theoretical and empirical analysis of safety properties in RLVR. Theoretically, we derive upper bounds on safety drift under KL-constrained optimization and prove conditions under which safety degradation is eliminated. Empirically, we conduct extensive experiments across five adversarial safety benchmarks, demonstrating that RLVR can enhance reasoning capabilities while maintaining or improving safety guardrails. Ablation studies examine the effects of optimization algorithm, model scale, and task domain. Our findings challenge the prevailing assumption of an inevitable safety-capability trade-off and establish that a specific training methodology can achieve both objectives simultaneously, providing insights for the safe deployment of reasoning-capable LLMs.
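The abstract does not state the bound itself; a standard Pinsker-style argument suggests one plausible shape (an illustrative sketch under the KL-constrained setting, not the paper's actual theorem):

```latex
% Sketch: a KL budget on fine-tuning directly caps safety drift.
\[
D_{\mathrm{KL}}\!\left(\pi_\theta \,\middle\|\, \pi_{\mathrm{ref}}\right) \le \varepsilon
\;\Longrightarrow\;
\bigl|\pi_\theta(U) - \pi_{\mathrm{ref}}(U)\bigr|
\;\le\; \mathrm{TV}\!\left(\pi_\theta,\pi_{\mathrm{ref}}\right)
\;\le\; \sqrt{\varepsilon/2}
\quad \text{(Pinsker's inequality)},
\]
where $U$ is any set of unsafe outputs, $\pi_{\mathrm{ref}}$ is the aligned
reference policy, and $\pi_\theta$ is the fine-tuned policy.
```

Under this reading, a vanishing KL budget forces the drift to zero, which is one natural interpretation of "conditions under which safety degradation is eliminated."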