🤖 AI Summary
This paper addresses the safety-constrained alignment problem for large language models (LLMs), i.e., maximizing output quality while keeping the risk of harmful content below a pre-specified threshold. The proposed Primal-Dual Direct Preference Optimization (PD-DPO) avoids training separate reward and cost models: a model fine-tuned with standard DPO on reward preference data supplies reward information, and a rearranged Lagrangian DPO objective then jointly optimizes preference learning and the safety cost constraint, yielding computationally efficient, theoretically grounded safe alignment. PD-DPO supports online data expansion and requires no extra prior knowledge about the optimal solution. On the PKU-SafeRLHF benchmark, PD-DPO substantially reduces memory and computational overhead while maintaining competitive performance and adhering to the safety constraint, demonstrating a robust trade-off between utility and safety.
📝 Abstract
The widespread application of Large Language Models (LLMs) imposes increasing demands on safety, such as reducing harmful content and false information, and avoiding tokens forbidden by rules and regulations. While several recent works study safe alignment of LLMs, they either require training separate reward and cost models, incurring high memory and computational costs, or need prior knowledge about the optimal solution. Motivated by these limitations, we study the constrained alignment problem for LLMs, i.e., maximizing the output reward while keeping the cost due to potentially unsafe content below a threshold. For this problem, we propose a novel primal-dual DPO approach, which first trains a model with standard DPO on reward preference data to provide reward information, and then adopts a rearranged Lagrangian DPO objective that uses this reward information to fine-tune the LLM on cost preference data. Our approach significantly reduces memory and computational costs and requires no extra prior knowledge. Moreover, we establish rigorous theoretical guarantees on the suboptimality and constraint violation of the output policy. We also extend our approach to an online data setting by incorporating exploration bonuses, which enables exploration of uncovered prompt-response space, and provide theoretical results that remove the dependence on preference data coverage. Experimental results on the widely used preference dataset PKU-SafeRLHF demonstrate the effectiveness of our approach.
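The primal-dual idea described above can be illustrated with a minimal sketch. The paper's exact objective is not given here, so the loss arrangement, the hyperparameters `beta` and `eta`, and the function names below are illustrative assumptions, not the authors' implementation: a standard DPO loss on a preference pair, a Lagrangian that adds a λ-weighted cost term, and a projected gradient ascent step on the multiplier λ ≥ 0.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss on one preference pair:
    -log σ(β · [(logπ_w − logπ_ref_w) − (logπ_l − logπ_ref_l)])."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(sigmoid(margin))

def lagrangian_loss(reward_loss: float, cost_loss: float, lam: float) -> float:
    """Primal objective (illustrative): reward-preference DPO loss
    plus a λ-weighted cost-preference DPO loss."""
    return reward_loss + lam * cost_loss

def dual_update(lam: float, avg_cost: float, threshold: float,
                eta: float = 0.05) -> float:
    """Dual step: projected gradient ascent on λ, keeping λ ≥ 0.
    λ grows while the average cost exceeds the threshold."""
    return max(0.0, lam + eta * (avg_cost - threshold))
```

In a full training loop, the primal step would minimize `lagrangian_loss` over model parameters while `dual_update` periodically adjusts λ from the current constraint slack, so the safety constraint is enforced without a separately trained reward or cost model.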