🤖 AI Summary
This paper addresses the optimal alignment of large language models (LLMs) under multiple objectives: maximizing a primary reward while strictly satisfying secondary utility constraints. To overcome the poor convergence of existing iterative Lagrangian methods and the suboptimality (due to parameterization limitations) of non-iterative dual approaches, the authors establish theoretically, for the first time, that an alternating optimization algorithm based on Lagrangian duality converges to a constraint-optimal policy, up to a parameterization error, and they quantify both the optimality gap in objective value and the constraint-violation gap. The method alternates between optimizing the LLM policy (via Lagrangian maximization) and the dual variables (via gradient descent). Empirical evaluation on the PKU-SafeRLHF dataset demonstrates significant improvements in the joint Pareto optimality of constraint satisfaction rate and primary-task performance.
📝 Abstract
We study the problem of computing an optimal large language model (LLM) policy for a constrained alignment problem, where the goal is to maximize a primary reward objective while satisfying constraints on secondary utilities. Despite the popularity of Lagrangian-based LLM policy search in constrained alignment, iterative primal-dual methods often fail to converge, and non-iterative dual-based methods do not achieve optimality in the LLM parameter space. To address these challenges, we employ Lagrangian duality to develop an iterative dual-based alignment method that alternates between updating the LLM policy via Lagrangian maximization and updating the dual variable via dual descent. In theory, we characterize the primal-dual gap between the primal value in the distribution space and the dual value in the LLM parameter space. We further quantify the optimality gap of the learned LLM policies at near-optimal dual variables with respect to both the objective and the constraint functions. These results prove that dual-based alignment methods can find an optimal constrained LLM policy, up to an LLM parameterization gap. We demonstrate the effectiveness and merits of our approach through extensive experiments conducted on the PKU-SafeRLHF dataset.
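The alternating primal-dual scheme described above can be illustrated on a toy scalar problem. This is a minimal sketch, not the paper's actual algorithm: the reward `r`, utility `u`, threshold `b`, and step sizes below are all hypothetical stand-ins for the LLM policy objective and secondary-utility constraint.

```python
# Toy sketch of iterative dual-based constrained optimization:
# maximize r(theta) subject to u(theta) >= b, via the Lagrangian
#   L(theta, lam) = r(theta) + lam * (u(theta) - b),  lam >= 0.
# Alternate: (1) maximize L over theta (the "policy" step),
#            (2) projected gradient descent on the dual variable lam.

def r(theta):
    """Primary reward (hypothetical), peaked at theta = 2."""
    return -(theta - 2.0) ** 2

def u(theta):
    """Secondary utility (hypothetical); u(theta) >= b forces theta <= 1."""
    return -theta

b = -1.0          # constraint threshold
theta, lam = 0.0, 0.0
for _ in range(2000):
    # (1) Policy step: a few gradient-ascent updates on L(., lam).
    for _ in range(10):
        grad_theta = -2.0 * (theta - 2.0) - lam   # dL/dtheta
        theta += 0.05 * grad_theta
    # (2) Dual step: descend on the dual function, projected onto lam >= 0.
    lam = max(0.0, lam - 0.05 * (u(theta) - b))

# The constrained optimum is theta* = 1 (constraint active), with
# multiplier lam* = 2 from the stationarity condition r'(theta) + lam*u'(theta) = 0.
print(round(theta, 2), round(lam, 2))
```

The dual variable grows while the constraint is violated and shrinks (toward zero) when it is slack, steering the inner maximization toward the feasible region; this is the mechanism the paper's convergence analysis makes precise for LLM policies.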