Proximal Supervised Fine-Tuning

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Supervised fine-tuning (SFT) often degrades the generalization capability of foundation models. To address this, the paper proposes proximal SFT, which models SFT as a special case of policy gradient optimization and introduces a trust-region constraint to explicitly limit policy drift. Treating every target token as having a constant positive advantage yields a proximal objective with soft update constraints and stability guarantees. Empirically, proximal SFT matches standard SFT's in-domain performance on mathematical reasoning and value alignment tasks while improving cross-domain generalization. It also prevents entropy collapse during long-horizon training and provides a more stable initialization for subsequent alignment or reinforcement learning stages. The core contribution is a proximal fine-tuning framework that ensures both optimization feasibility and training stability in instruction tuning.

📝 Abstract
Supervised fine-tuning (SFT) of foundation models often leads to poor generalization, where prior capabilities deteriorate after tuning on new tasks or domains. Inspired by trust-region policy optimization (TRPO) and proximal policy optimization (PPO) in reinforcement learning (RL), we propose Proximal SFT (PSFT). This fine-tuning objective incorporates the benefits of a trust region, effectively constraining policy drift during SFT while maintaining competitive tuning. By viewing SFT as a special case of policy gradient methods with constant positive advantages, we derive PSFT, which stabilizes optimization and improves generalization while leaving room for further optimization in subsequent post-training stages. Experiments across mathematical and human-value domains show that PSFT matches SFT in-domain, outperforms it in out-of-domain generalization, remains stable under prolonged training without causing entropy collapse, and provides a stronger foundation for subsequent optimization.
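The key idea, as described in the abstract, is that SFT can be read as a policy gradient update with a constant positive advantage per target token, so a PPO-style clipped surrogate can bound how far the policy moves per step. The sketch below is a hypothetical, minimal NumPy illustration of that objective, not the paper's reference implementation: with advantage fixed at A = 1, the PPO surrogate min(r·A, clip(r, 1−ε, 1+ε)·A) reduces to min(r, clip(r, 1−ε, 1+ε)), where r is the per-token probability ratio between the current and the pre-update policy.

```python
import numpy as np

def psft_loss(logp_new, logp_old, eps=0.2):
    """Hypothetical PPO-clip style SFT loss with constant advantage A = 1.

    logp_new: per-token log-probs under the current policy.
    logp_old: per-token log-probs under the policy before the update.
    eps:      trust-region width; ratios outside [1 - eps, 1 + eps] get
              their gradient cut off by the clip.
    """
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # Negate because we maximize the surrogate but return a loss.
    return -np.mean(np.minimum(ratio, clipped))

# When the policy has not moved (ratio = 1), the loss is exactly -1.
print(psft_loss(np.zeros(4), np.zeros(4)))   # -1.0
# A large jump in log-prob is capped at 1 + eps, limiting the incentive
# to drift further from the old policy.
print(psft_loss(np.full(4, 2.0), np.zeros(4)))  # -1.2
```

Unlike the plain SFT loss (negative log-likelihood, which always pushes target-token probability toward 1), this surrogate stops rewarding updates once the per-token ratio exceeds 1 + ε, which is what the abstract means by constraining policy drift while "maintaining competitive tuning".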
Problem

Research questions and friction points this paper is trying to address.

Addresses poor generalization in supervised fine-tuning of foundation models
Proposes Proximal SFT (PSFT) to constrain policy drift during optimization
Improves out-of-domain generalization while maintaining in-domain performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates trust-region constraints into the SFT objective, following TRPO/PPO
Derives PSFT by treating SFT as policy gradient optimization with constant positive advantages
Stabilizes optimization, prevents entropy collapse, and improves generalization