Towards On-Policy SFT: Distribution Discriminant Theory and its Applications in LLM Training

📅 2026-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Although supervised fine-tuning (SFT) is computationally efficient, its generalization capability is limited by the absence of on-policy data. This work proposes a novel SFT framework grounded in Distribution Discriminant Theory (DDT), which explicitly steers training toward the model's own distribution through both the loss function and the data. Specifically, the approach combines an In-Distribution Finetuning (IDFT) loss with a Hinted Decoding data-realignment technique, guiding learning at the optimization level and the data level, respectively. The resulting method preserves the training efficiency of standard SFT while substantially improving generalization, achieving results comparable to offline reinforcement learning algorithms such as DPO and SimPO, and thus offers a practical and efficient alternative for scenarios where deploying reinforcement learning is infeasible.
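
To make the data-level idea concrete, below is a minimal sketch of what a Hinted Decoding-style realignment step could look like: the model regenerates each training completion itself, seeded by a short prefix ("hint") taken from the reference answer, so the resulting target stays close to the model's own distribution while preserving the reference's intent. This page gives no implementation details, so the model name, the `hint_frac` parameter, the hint format, and the sampling settings below are all assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch of a Hinted Decoding-style data realignment step.
# ASSUMPTIONS: the placeholder model, hint_frac, the character-prefix hint
# format, and the sampling settings are illustrative guesses, not the
# paper's specification.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

def realign_example(prompt: str, reference: str, hint_frac: float = 0.25) -> str:
    """Regenerate a training completion with the model itself, seeded by a
    short prefix ("hint") taken from the reference answer."""
    hint = reference[: int(len(reference) * hint_frac)]  # assumed hint format
    inputs = tok(prompt + hint, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs, max_new_tokens=256, do_sample=True, top_p=0.9,
            pad_token_id=tok.eos_token_id,
        )
    # Keep only the newly generated continuation, then prepend the hint
    # to form the realigned, near-on-policy SFT target.
    continuation = tok.decode(
        out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return hint + continuation
```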

📝 Abstract
Supervised fine-tuning (SFT) is computationally efficient but often yields inferior generalization compared to reinforcement learning (RL). This gap is primarily driven by RL's use of on-policy data. We propose a framework to bridge this gap by enabling On-Policy SFT. We first present Distribution Discriminant Theory (DDT), which explains and quantifies the alignment between data and the model-induced distribution. Leveraging DDT, we introduce two complementary techniques: (i) In-Distribution Finetuning (IDFT), a loss-level method that enhances the generalization ability of SFT, and (ii) Hinted Decoding, a data-level technique that re-aligns the training corpus to the model's distribution. Extensive experiments demonstrate that our framework achieves generalization performance on par with prominent offline RL algorithms, including DPO and SimPO, while maintaining the efficiency of an SFT pipeline. The proposed framework thus offers a practical alternative in domains where RL is infeasible. We open-source the code here: https://github.com/zhangmiaosen2000/Towards-On-Policy-SFT
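
The abstract does not spell out the IDFT loss itself. As one plausible loss-level reading, the sketch below reweights the standard SFT cross-entropy by the model's own (detached) probability of each target token, so that gradient mass concentrates on tokens already near the model's distribution. The function name `idft_style_loss` and this exact weighting scheme are assumptions for illustration, not the paper's definition.

```python
# Hypothetical sketch of an "in-distribution" reweighted SFT loss.
# ASSUMPTION: weighting each token's cross-entropy by the model's own
# detached probability is one plausible reading of a loss that favors
# in-distribution tokens; it is not the paper's stated IDFT formula.
import torch
import torch.nn.functional as F

def idft_style_loss(logits: torch.Tensor, labels: torch.Tensor,
                    ignore_index: int = -100) -> torch.Tensor:
    """Cross-entropy where each token is weighted by the model's own
    (detached) probability of the target token."""
    logits = logits[:, :-1].contiguous()   # predict token t+1 from token t
    labels = labels[:, 1:].contiguous()
    mask = labels != ignore_index
    safe_labels = labels.clamp_min(0)      # make ignore_index gather-safe
    logp = F.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(-1, safe_labels.unsqueeze(-1)).squeeze(-1)
    weight = tok_logp.detach().exp()       # p_theta(y_t), no gradient flows
    return -(weight * tok_logp)[mask].mean()
```
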
Problem

Research questions and friction points this paper is trying to address.

Supervised Fine-Tuning
On-Policy Learning
Generalization Gap
Distribution Alignment
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distribution Discriminant Theory
On-Policy SFT
In-Distribution Finetuning
Hinted Decoding
Supervised Fine-Tuning
Miaosen Zhang
Department of Computer Science, Southeast University, Nanjing, China; Microsoft Research Asia, Beijing, China
Yishan Liu
Department of Computer Science, Southeast University, Nanjing, China; Shopee, Shanghai, China
Shuxia Lin
Department of Computer Science, Southeast University, Nanjing, China
Xu Yang
Department of Computer Science, Southeast University, Nanjing, China
Qi Dai
Microsoft Research
Multimedia, Computer Vision
Chong Luo
Microsoft Research
Multimedia Communications, Computer Vision
Weihao Jiang
Shopee, Shanghai, China
Peng Hou
Shopee, Shanghai, China
Anxiang Zeng
Shopee, Shanghai, China
Xin Geng
School of Computer Science and Engineering, Southeast University
Artificial Intelligence, Pattern Recognition, Machine Learning
Baining Guo
Distinguished Scientist, Microsoft Research
Computer Graphics, Virtual Reality, Geometric Modeling