Bridging SFT and RL: Dynamic Policy Optimization for Robust Reasoning

📅 2026-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental tension in large language model post-training between the high fitting bias of supervised fine-tuning (SFT) and the high gradient variance of reinforcement learning (RL). To reconcile this conflict, the authors propose DYPO, a novel framework that, for the first time, models the SFT–RL gradient discrepancy from a statistical perspective. DYPO reduces RL variance via group-aligned loss, corrects SFT bias through multi-teacher distillation, and introduces a dynamic exploration–exploitation gating mechanism that adaptively fuses both signals based on reward feedback. Theoretical analysis demonstrates that DYPO linearly reduces fitting bias while minimizing overall variance. Empirical results show that DYPO achieves average improvements of 4.8% on complex reasoning benchmarks and 13.3% on out-of-distribution tasks, substantially outperforming conventional sequential training pipelines.

📝 Abstract
Post-training paradigms for Large Language Models (LLMs), primarily Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), face a fundamental dilemma: SFT provides stability (low variance) but suffers from high fitting bias, while RL enables exploration (low bias) but grapples with high gradient variance. Existing unified optimization strategies often employ naive loss weighting, overlooking the statistical conflict between these distinct gradient signals. In this paper, we provide a rigorous theoretical analysis of this bias-variance trade-off and propose **DYPO** (Dynamic Policy Optimization), a unified framework designed to structurally mitigate this conflict. DYPO integrates three core components: (1) a *Group Alignment Loss (GAL)* that leverages intrinsic group dynamics to significantly reduce RL gradient variance; (2) a *Multi-Teacher Distillation* mechanism that corrects SFT fitting bias via diverse reasoning paths; and (3) a *Dynamic Exploitation-Exploration Gating* mechanism that adaptively arbitrates between stable SFT and exploratory RL based on reward feedback. Theoretical analysis confirms that DYPO linearly reduces fitting bias and minimizes overall variance. Extensive experiments demonstrate that DYPO significantly outperforms traditional sequential pipelines, achieving an average improvement of 4.8% on complex reasoning benchmarks and 13.3% on out-of-distribution tasks. Our code is publicly available at https://github.com/Tocci-Zhu/DYPO.
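The gating idea in component (3) can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual algorithm: the window size, the sigmoid gate, and the reward-trend heuristic are all assumptions made for illustration. The gate weights the stable SFT loss when reward signals are flat or scarce, and shifts weight toward the exploratory RL loss as rewards trend upward.

```python
import math


class DynamicGate:
    """Toy sketch of reward-driven SFT/RL gating (assumed design, not DYPO's spec).

    The gate g in [0, 1] blends the two losses as g * L_SFT + (1 - g) * L_RL.
    """

    def __init__(self, window=8, temperature=5.0):
        self.rewards = []          # running reward history
        self.window = window       # how many recent rewards inform the gate
        self.temperature = temperature  # sharpness of the SFT/RL switch

    def gate(self):
        # With almost no reward history, lean fully on stable SFT (g = 1).
        if len(self.rewards) < 2:
            return 1.0
        recent = self.rewards[-self.window:]
        # Reward trend: latest reward vs. the recent mean.
        # An improving trend pushes the gate toward exploratory RL (g -> 0).
        trend = recent[-1] - sum(recent) / len(recent)
        return 1.0 / (1.0 + math.exp(self.temperature * trend))

    def combined_loss(self, sft_loss, rl_loss, reward):
        # Record the new reward, then blend the two loss signals.
        self.rewards.append(reward)
        g = self.gate()
        return g * sft_loss + (1.0 - g) * rl_loss
```

For example, with a single observed reward the combined loss equals the SFT loss; after several steps of rising reward, the gate drops below 0.5 and the RL term dominates.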
Problem

Research questions and friction points this paper is trying to address.

Supervised Fine-Tuning
Reinforcement Learning
bias-variance trade-off
gradient variance
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Policy Optimization
Bias-Variance Trade-off
Group Alignment Loss
Multi-Teacher Distillation
Reinforcement Learning