🤖 AI Summary
To address the “alignment tax”—the degradation of general capabilities in large language models (LLMs) during reinforcement learning–based safety alignment—we propose Null-Space constrained Policy Optimization (NSPO), a novel paradigm. Methodologically, NSPO geometrically projects the safety-oriented policy gradient onto the null space of gradients from general-task objectives, thereby theoretically guaranteeing orthogonality between safety optimization and capability preservation. It integrates gradient orthogonal decomposition, safety reward modeling, and theory-driven constrained optimization. Empirically, NSPO achieves zero accuracy loss across diverse downstream tasks—including mathematical reasoning, code generation, and instruction following—while attaining state-of-the-art safety performance. Notably, it achieves a strong safety–capability trade-off using only 40% of the PKU-SafeRLHF human preference annotations, demonstrating significant data efficiency. This work bridges theoretical rigor with practical efficacy in safe LLM alignment.
📝 Abstract
As Large Language Models (LLMs) are increasingly deployed in real-world applications, it is important to ensure their behaviors align with human values, societal norms, and ethical principles. However, safety alignment under Reinforcement Learning (RL) often suffers from forgetting learned general abilities, a phenomenon known as the alignment tax. To address this issue, we introduce Null-Space constrained Policy Optimization (NSPO), a novel RL framework that aligns LLMs for safety while preserving their core abilities. The safety policy gradients are geometrically projected into the null space of general-task gradients, thereby mitigating the safety alignment tax. In addition, we theoretically prove that NSPO preserves the model's original core capabilities while still guaranteeing a descent direction for effective safety alignment. Extensive experiments demonstrate that NSPO outperforms existing methods by a large margin, achieving state-of-the-art safety performance without sacrificing accuracy on general tasks, including math, code, and instruction following. Notably, NSPO is data-efficient: it requires only 40% of the public human-annotated safety data from PKU-SafeRLHF to achieve promising safety performance, without the large amounts of mixed general-task data required by existing alignment methods.
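To make the geometric idea concrete, here is a minimal numerical sketch of null-space gradient projection — not the authors' implementation, and the function name and toy dimensions are illustrative assumptions. Given a matrix whose rows are general-task gradients, the safety gradient is projected onto the orthogonal complement of their row space, so the resulting update has zero first-order effect on the general objectives while remaining a descent direction for safety:

```python
import numpy as np

def null_space_project(safety_grad, general_grads):
    """Project safety_grad onto the null space of the rows of general_grads.

    general_grads: (k, d) array whose rows are general-task gradients.
    safety_grad:   (d,)  safety policy gradient.
    """
    # Orthonormal basis for the row space of general_grads via SVD.
    _, s, vt = np.linalg.svd(general_grads, full_matrices=False)
    rank = int(np.sum(s > 1e-10))
    basis = vt[:rank]  # (rank, d), orthonormal rows
    # Subtract the component of safety_grad lying in the row space.
    return safety_grad - basis.T @ (basis @ safety_grad)

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 8))  # toy general-task gradients
g = rng.standard_normal(8)       # toy safety gradient
g_proj = null_space_project(g, G)

# The projected update is orthogonal to every general-task gradient...
print(np.allclose(G @ g_proj, 0))  # → True (up to numerics)
# ...and g·g_proj = ||g_proj||² ≥ 0, so it is still a (non-ascent)
# direction for the safety objective whenever g has a null-space part.
print(float(g @ g_proj) > 0)
```

In the paper's setting the gradients live in the model's full parameter space rather than in 8 dimensions, so a practical version would use low-rank or layer-wise approximations instead of a dense SVD; the orthogonality property demonstrated above is the same.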