🤖 AI Summary
Existing RLHF methods (e.g., PPO, DPO) rely on task-specific optimization, lack inference-time self-correction, and generalize poorly to out-of-distribution (OOD) tasks. To address these limitations, we propose Self-Improving Robust Preference Optimization (SRPO), a task-agnostic framework that formulates preference learning as a joint min-max self-improvement process between a generative policy and a self-improvement policy. Crucially, SRPO equivalently reformulates this objective into a scalable, supervised offline loss, eliminating the need for reward modeling, online interaction, or adversarial training. This enables fully offline, task-agnostic, and robust alignment. Evaluated on the OOD XSUM benchmark, SRPO achieves a 90% AI win rate after five self-revision rounds, outperforming DPO by 15 percentage points, and demonstrates significantly improved cross-task robustness.
📝 Abstract
Both online and offline RLHF methods, such as PPO and DPO, have been extremely successful in aligning AI with human preferences. Despite their success, existing methods suffer from a fundamental problem: their optimal solution is highly task-dependent, i.e., not robust to out-of-distribution (OOD) tasks. Here we address this challenge by proposing Self-Improving Robust Preference Optimization (SRPO), a practical and mathematically principled offline RLHF framework that is completely robust to changes in the task. The key idea of SRPO is to cast the problem of learning from human preferences as a self-improvement process, which can be expressed mathematically as a min-max objective that jointly optimizes a self-improvement policy and a generative policy in an adversarial fashion. The solution of this optimization problem is independent of the training task, and it is therefore robust to changes in that task. We then show that this objective can be re-expressed as a non-adversarial offline loss that can be optimized at scale using standard supervised techniques, without any need for a reward model or online inference. We demonstrate the effectiveness of SRPO in terms of AI Win-Rate (WR) against human (GOLD) completions. In particular, when evaluated on the OOD XSUM dataset, SRPO outperforms the celebrated DPO by a clear margin of 15% after 5 self-revisions, achieving a WR of 90%.
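The adversarial formulation described above can be sketched as a min-max objective. The notation below (π for the generative policy, ρ for the self-improvement policy that revises a draft y into y′, p(y′ ≻ y | x) for the human preference model, and a KL regularizer with coefficient β against a reference policy) is an illustrative assumption, not the paper's exact notation:

```latex
% Schematic min-max self-improvement objective (illustrative notation only):
% the self-improvement policy \rho tries to revise a draft y into a y' that
% humans prefer, while the generative policy \pi tries to produce drafts that
% leave no room for improvement.
\min_{\pi}\ \max_{\rho}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot \mid x),\; y' \sim \rho(\cdot \mid x, y)}
\big[\, p(y' \succ y \mid x) \,\big]
\;-\; \beta\, \mathrm{KL}\big(\rho(\cdot \mid x, y) \,\|\, \rho_{\mathrm{ref}}(\cdot \mid x, y)\big)
```

In a sketch of this form, the inner maximization is defined relative to the model's own outputs rather than a fixed task distribution, which is the intuition behind the task-independence claim; the abstract's stated contribution is that the objective admits an equivalent non-adversarial offline loss.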