🤖 AI Summary
This work addresses key limitations in existing self-play fine-tuning methods such as SPIN, which suffer from decaying reward advantages and a reliance on reference policies that misalign training and generation objectives. To overcome these issues, the authors propose T-SPIN, a novel reference-free framework that integrates historical advantage estimation and entropy regularization into a triplet-based contrastive learning paradigm. By leveraging historical policy rollouts to stabilize advantage signals and incorporating entropy-aware exploration, T-SPIN enhances both training stability and policy diversity. Experimental results demonstrate that T-SPIN consistently outperforms SPIN across multiple benchmarks, achieving more stable convergence and matching or exceeding the performance of full supervised fine-tuning with only 25% of the labeled data.
📝 Abstract
Recently, self-play fine-tuning (SPIN) has been proposed to adapt large language models to downstream applications with scarce expert-annotated data by iteratively generating synthetic responses from the model itself. However, SPIN is designed to optimize only the current reward advantages of annotated responses over the synthetic responses, which may gradually vanish during iterations, leading to unstable optimization. Moreover, the use of a reference policy induces a misalignment between the reward formulation used for training and the metric used for generation. To address these limitations, we propose a novel Triplet-based Self-Play fIne-tuNing (T-SPIN) method that integrates two key designs. First, beyond current advantages, T-SPIN additionally incorporates historical advantages between iteratively generated responses and proto-synthetic responses produced by the initial policy. Even if the current advantages diminish, historical advantages remain effective, stabilizing the overall optimization. Second, T-SPIN introduces an entropy constraint into the self-play framework, which is theoretically justified to support reference-free fine-tuning, eliminating the training-generation discrepancy. Empirical results on various tasks demonstrate not only the superior performance of T-SPIN over SPIN but also its stable evolution across iterations. Remarkably, compared to supervised fine-tuning, T-SPIN achieves comparable or even better performance with only 25% of the samples, highlighting its effectiveness when annotated data is scarce.
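The abstract does not give the objective explicitly, but the triplet structure it describes can be sketched as a per-example loss with three ingredients: a current term comparing the annotated response against the latest synthetic response, a historical term comparing the latest synthetic response against the proto-synthetic response from the initial policy, and an entropy regularizer replacing the reference-policy term. The function name, the logistic (DPO/SPIN-style) pairwise form, and the coefficients `beta` and `lam` below are assumptions for illustration, not the paper's actual formulation:

```python
import math

def log_sigmoid(x: float) -> float:
    # Numerically stable log(sigmoid(x)).
    return x - math.log1p(math.exp(x)) if x < 0 else -math.log1p(math.exp(-x))

def t_spin_loss(lp_expert: float, lp_synth: float, lp_proto: float,
                entropy: float, beta: float = 1.0, lam: float = 0.1) -> float:
    """Illustrative triplet-style objective (form and names are assumptions).

    lp_expert: log-prob of the annotated (expert) response under the policy
    lp_synth:  log-prob of the latest synthetic response
    lp_proto:  log-prob of the proto-synthetic response from the initial policy
    entropy:   entropy of the current policy (higher = more exploratory)
    """
    # Current advantage: annotated response should beat the latest synthetic one.
    current = -log_sigmoid(beta * (lp_expert - lp_synth))
    # Historical advantage: the latest synthetic response should beat the
    # proto-synthetic one, so this term stays informative even when the
    # current advantage vanishes.
    historical = -log_sigmoid(beta * (lp_synth - lp_proto))
    # Entropy regularizer in place of a reference-policy term (reference-free).
    return current + historical - lam * entropy
```

Under this sketch, widening both margins lowers the loss, and the entropy term rewards diversity independently of any reference policy, which is the misalignment the abstract says T-SPIN removes.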