🤖 AI Summary
To address insufficient regularization in self-play alignment of large language models (LLMs), which can lead to over-optimization and unstable Nash equilibria, this paper proposes Regularized Self-Play Policy Optimization (RSPO). RSPO incorporates a chosen regularization term, such as KL divergence with respect to the reference policy, directly into the game-theoretic loss, and the paper proves last-iterate convergence to the Nash equilibrium of the corresponding regularized game. Empirically, forward and reverse KL divergence play complementary roles: forward KL regularization reduces response length, while reverse KL markedly improves raw win rates. Fine-tuning Mistral-7B-Instruct and benchmarking on AlpacaEval-2, RSPO with a linear combination of forward and reverse KL raises the length-controlled win rate of the unregularized SPPO baseline from 28.53% to 35.44%, while also improving response diversity.
📝 Abstract
Self-play alignment algorithms have been developed as effective methods for fine-tuning large language models (LLMs), formulating preference optimization as a two-player game. However, regularization with respect to the reference policy, which is crucial for mitigating over-optimization, has been insufficiently investigated in self-play alignment. In this paper, we show that regularization can significantly improve unregularized self-play. To study the impact of different regularizations in self-play alignment, we propose Regularized Self-Play Policy Optimization (RSPO). This generalized framework regularizes self-play by simply adding a chosen regularization term to the loss, while maintaining provable last-iterate convergence to the Nash equilibrium of the corresponding regularized game. Surprisingly, empirical evaluations using the Mistral-7B-Instruct base model reveal that forward KL divergence regularization reduces response length in RSPO, whereas reverse KL divergence markedly improves raw win rates. RSPO with a linear combination of forward and reverse KL divergence regularization substantially increases the length-controlled win rate on AlpacaEval-2, elevating the unregularized self-play alignment method (SPPO) from $28.53\%$ to $35.44\%$. Finally, we show that RSPO also improves response diversity.
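The abstract describes RSPO as adding a linear combination of forward KL ($\mathrm{KL}(\pi_{\mathrm{ref}} \| \pi)$) and reverse KL ($\mathrm{KL}(\pi \| \pi_{\mathrm{ref}})$) regularization to a self-play loss. The following is a minimal illustrative sketch of that idea on discrete distributions; the function names and coefficients `alpha` and `beta` are assumptions for illustration, not the paper's notation or hyperparameters.

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def regularized_loss(base_loss, pi, pi_ref, alpha=0.1, beta=0.1):
    """Sketch of an RSPO-style objective: base self-play loss plus a
    linear combination of reverse KL, KL(pi || pi_ref), and forward KL,
    KL(pi_ref || pi). Coefficients alpha/beta are hypothetical."""
    reverse_kl = kl(pi, pi_ref)    # reported to improve raw win rates
    forward_kl = kl(pi_ref, pi)    # reported to reduce response length
    return base_loss + alpha * reverse_kl + beta * forward_kl

# Example: a policy that drifts from the reference pays a regularization cost.
pi = [0.7, 0.3]
pi_ref = [0.5, 0.5]
penalty = regularized_loss(0.0, pi, pi_ref)
```

When `pi == pi_ref`, both KL terms vanish and the objective reduces to the unregularized self-play loss; the coefficients trade off proximity to the reference policy against optimizing the game objective.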