SPACE: Noise Contrastive Estimation Stabilizes Self-Play Fine-Tuning for Large Language Models

πŸ“… 2025-12-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing reward-difference-based self-play fine-tuning methods suffer from objective degradation and training instability, as they disregard the absolute reward values of both real and synthetic data. To address this, we propose SPACEβ€”the first LLM self-play fine-tuning framework incorporating Noise Contrastive Estimation (NCE). SPACE employs a binary discrimination architecture to jointly model absolute rewards for real and synthetic data, with theoretical guarantees of objective consistency and stable convergence. Its pipeline comprises self-play data generation, NCE-based loss modeling, and iterative optimization. Experiments across multiple tasks demonstrate that SPACE significantly outperforms supervised fine-tuning and state-of-the-art self-play baselines. Notably, it achieves superior performance with only a small number of real samples, while exhibiting exceptional training stability and consistent improvement throughout optimization.
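The binary discrimination objective described in the summary plausibly takes the standard NCE form sketched below; this is our reconstruction from the wording above, not the paper's notation, and the reward parameterization $r_\theta$ is an assumption:

```latex
% NCE-style binary discrimination objective (reconstruction, not the paper's notation):
% real samples x ~ p_data are positives; self-generated samples x' ~ p_{theta_t} are noise.
\mathcal{L}_{\mathrm{NCE}}(\theta) =
  -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log \sigma\big(r_\theta(x)\big)\big]
  \;-\;\mathbb{E}_{x' \sim p_{\theta_t}}\big[\log\big(1 - \sigma\big(r_\theta(x')\big)\big)\big]
```

Because the two expectations are optimized separately, the absolute reward of each data type enters the loss directly, matching the summary's claim that SPACE models absolute rewards rather than reward gaps.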

πŸ“ Abstract
Self-play fine-tuning has demonstrated a promising ability to adapt large language models (LLMs) to downstream tasks with limited real-world data. The basic principle is to iteratively refine the model with real samples and synthetic ones generated by the model itself. However, existing methods primarily focus on the relative gap between the rewards of the two types of data, neglecting their absolute values. Through theoretical analysis, we identify that gap-based methods suffer from unstable evolution due to potentially degenerate objectives. To address this limitation, we introduce a novel self-play fine-tuning method, namely Self-PlAy via Noise Contrastive Estimation (SPACE), which leverages noise contrastive estimation to capture the real-world data distribution. Specifically, SPACE treats synthetic samples as auxiliary components and discriminates them from real ones in a binary classification manner. As a result, SPACE independently optimizes the absolute reward values for each type of data, ensuring a consistently meaningful objective and thereby avoiding the instability issue. Theoretically, we show that the optimal solution of the SPACE objective aligns with the underlying distribution of real-world data, and that SPACE guarantees provably stable convergence to the optimal distribution. Empirically, we show that SPACE significantly improves the performance of LLMs across various tasks and outperforms supervised fine-tuning that employs many more real-world samples. Compared to gap-based self-play fine-tuning methods, SPACE exhibits remarkable superiority and stable evolution.
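As a concrete reading of the abstract, here is a minimal PyTorch sketch of such a binary discrimination loss, assuming the implicit reward is a scaled log-likelihood ratio against a frozen reference model; the `beta` scaling and all names are our assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def space_nce_loss(logp_real, logp_real_ref, logp_synth, logp_synth_ref, beta=1.0):
    """NCE-style binary discrimination loss (sketch, not the paper's code).

    Real samples are treated as positives and self-generated samples as noise.
    The implicit reward r = beta * (log p_model - log p_ref) is an assumed
    parameterization, not taken from the paper.
    """
    r_real = beta * (logp_real - logp_real_ref)      # absolute reward of real data
    r_synth = beta * (logp_synth - logp_synth_ref)   # absolute reward of synthetic data
    # Classify real samples as real: maximize log sigma(r_real).
    loss_real = -F.logsigmoid(r_real).mean()
    # Classify synthetic samples as noise: maximize log(1 - sigma(r_synth)).
    loss_synth = -F.logsigmoid(-r_synth).mean()
    return loss_real + loss_synth

# Example with dummy per-sequence log-probabilities:
loss = space_nce_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
```

Note the contrast with a gap-based loss such as -log Οƒ(r_real - r_synth): there, shifting both rewards by the same amount leaves the loss unchanged, whereas the two independent terms above keep the objective informative even when both rewards drift together.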
Problem

Research questions and friction points this paper is trying to address.

Stabilizes self-play fine-tuning for large language models
Addresses the instability caused by degenerate objectives in gap-based methods
Ensures stable convergence to the real-world data distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

SPACE uses noise contrastive estimation to align the model with the real data distribution
It treats synthetic samples as auxiliary noise and discriminates them from real samples in a binary classification
The method independently optimizes the absolute reward values of each data type, avoiding gap-based instability (a minimal end-to-end sketch follows this list)
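Continuing the imports from the loss sketch above, here is a minimal sketch of one self-play round, following the pipeline named in the summary (self-play data generation, NCE-based loss modeling, iterative optimization); `generate_responses` and `sequence_logprob` are hypothetical helpers, not a real API:

```python
def space_round(model, ref_model, prompts, real_responses, optimizer,
                generate_responses, sequence_logprob, beta=1.0):
    """One SPACE-style iteration (sketch; all helper functions are hypothetical)."""
    # 1. Self-play data generation: sample synthetic responses from the current model.
    with torch.no_grad():
        synth_responses = generate_responses(model, prompts)

    # 2. NCE-based loss modeling: score both data types under the current and
    #    the frozen reference model, then apply the binary discrimination loss.
    logp_real = sequence_logprob(model, prompts, real_responses)
    logp_synth = sequence_logprob(model, prompts, synth_responses)
    with torch.no_grad():
        logp_real_ref = sequence_logprob(ref_model, prompts, real_responses)
        logp_synth_ref = sequence_logprob(ref_model, prompts, synth_responses)
    loss = space_nce_loss(logp_real, logp_real_ref, logp_synth, logp_synth_ref, beta)

    # 3. Iterative optimization: one gradient step; the updated model seeds the
    #    next round's synthetic data.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```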
Authors
Yibo Wang
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Qing-Guo Chen
Alibaba Inc.
machine learning
Zhao Xu
Alibaba International Digital Commerce
Weihua Luo
Alibaba
natural language processing, machine learning, artificial intelligence
Kaifu Zhang
Assistant Professor of Marketing, Carnegie Mellon University
Two-sided markets, Internet platforms, e-commerce
Lijun Zhang
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China