Mitigating the Safety Alignment Tax with Null-Space Constrained Policy Optimization

📅 2025-12-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the “alignment tax”—the degradation of general capabilities in large language models (LLMs) during reinforcement learning–based safety alignment—we propose Null-Space Policy Optimization (NSPO), a novel paradigm. Methodologically, NSPO geometrically projects the safety-oriented policy gradient onto the null space of gradients from general-task objectives, thereby theoretically guaranteeing orthogonality between safety optimization and capability preservation. It integrates gradient orthogonal decomposition, safety reward modeling, and theory-driven constrained optimization. Empirically, NSPO achieves zero accuracy loss across diverse downstream tasks—including mathematical reasoning, code generation, and instruction following—while attaining state-of-the-art safety performance. Notably, it reaches the optimal safety–capability trade-off using only 40% of the PKU-SafeRLHF human preference annotations, demonstrating significant data efficiency. This work bridges theoretical rigor with practical efficacy in safe LLM alignment.

📝 Abstract
As Large Language Models (LLMs) are increasingly deployed in real-world applications, it is important to ensure their behaviors align with human values, societal norms, and ethical principles. However, safety alignment under Reinforcement Learning (RL) often causes forgetting of learned general abilities, a phenomenon known as the alignment tax. To address this issue, we introduce Null-Space constrained Policy Optimization (NSPO), a novel RL framework that aligns LLMs for safety while preserving their core abilities. The safety policy gradients are geometrically projected into the null space of general tasks, thereby mitigating the safety alignment tax. In addition, we theoretically prove that NSPO preserves the model's original core capabilities while still guaranteeing a descent direction for effective safety alignment. Extensive experiments demonstrate that NSPO outperforms existing methods by a large margin, achieving state-of-the-art safety performance without sacrificing accuracy on general tasks, including math, code, and instruction-following tasks. Notably, NSPO is data-efficient: it requires only 40% of the public human-annotated safety data from PKU-SafeRLHF to achieve strong safety performance, without the large amounts of mixed general-task data required by existing alignment methods.
Problem

Research questions and friction points this paper is trying to address.

Mitigating safety alignment tax in LLMs
Preserving core abilities during safety alignment
Achieving safety without sacrificing general task accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Null-Space constrained Policy Optimization for LLM safety alignment
Projects safety gradients into null space of general tasks
Preserves core abilities while ensuring effective safety alignment
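The core idea above—projecting the safety gradient into the null space of general-task gradients—can be sketched numerically. This is a minimal illustration, not the paper's implementation: the function name, the SVD-based rank estimate, and the toy gradients are all assumptions for demonstration; in practice the gradients would be flattened LLM policy gradients.

```python
import numpy as np

def null_space_project(safety_grad, task_grads, eps=1e-8):
    """Project a safety gradient onto the null space of general-task
    gradients, so the safety update is (to first order) orthogonal to
    directions that change general-task behavior.

    safety_grad: (d,) flattened safety policy gradient
    task_grads:  (k, d) stacked flattened general-task gradients
    """
    G = np.asarray(task_grads, dtype=float)
    g = np.asarray(safety_grad, dtype=float)
    # Orthonormal basis of the task-gradient span via SVD.
    _, s, Vt = np.linalg.svd(G, full_matrices=False)
    rank = int(np.sum(s > eps * s.max())) if s.size else 0
    V = Vt[:rank]  # (rank, d) orthonormal rows spanning the task gradients
    # Remove the component of g that lies in the task-gradient span.
    return g - V.T @ (V @ g)

# Toy check: the projected gradient is orthogonal to every task gradient.
task_grads = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
safety_grad = np.array([3.0, 4.0, 5.0])
g_proj = null_space_project(safety_grad, task_grads)
print(g_proj)               # only the third component survives: [0. 0. 5.]
print(task_grads @ g_proj)  # ~[0. 0.]
```

The projected direction still has positive inner product with the original safety gradient whenever the safety gradient has any component outside the task-gradient span, which is what makes it a valid descent direction for the safety objective while leaving general-task behavior unchanged to first order.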
Yifan Niu
PhD student, Hong Kong University of Science and Technology
Machine Learning
Han Xiao
The Hong Kong University of Science and Technology (Guangzhou)
Dongyi Liu
The Hong Kong University of Science and Technology (Guangzhou)
Nuo Chen
The Hong Kong University of Science and Technology (Guangzhou)
Jia Li
The Hong Kong University of Science and Technology (Guangzhou), The Hong Kong University of Science and Technology