AI Summary
In post-training large language models, supervised fine-tuning (SFT) suffers from overfitting, reinforcement learning (RL) achieves strong generalization but incurs high computational cost, and dynamic fine-tuning (DFT) offers a compromise yet exhibits training drift and poor stability due to the absence of distributional anchoring. Method: We propose Anchored Supervised Fine-Tuning (ASFT), which integrates token-level reward-weighted regression and lightweight KL-divergence regularization into the DFT framework. Theoretically, ASFT characterizes and mitigates distributional drift, yielding a tight RL performance bound. Contribution/Results: ASFT achieves both efficient imitation learning and robust generalization. Empirical evaluation on mathematical reasoning, medical knowledge alignment, and code generation demonstrates significant improvements over both SFT and DFT, achieving higher accuracy with negligible computational overhead. These results validate the effectiveness of theory-driven algorithm design.
Abstract
Post-training of large language models involves a fundamental trade-off between supervised fine-tuning (SFT), which efficiently mimics demonstrations but tends to memorize, and reinforcement learning (RL), which achieves better generalization at higher computational cost. Dynamic Fine-Tuning (DFT) recently emerged as a promising middle ground, reweighting SFT objectives with token probabilities and achieving improvements in certain reasoning domains, though it exhibits instability in other tasks. We provide an analysis of DFT through the reward-weighted regression (RWR) framework, revealing that it corresponds to a specific auxiliary distribution choice that yields provably tighter RL bounds than standard SFT. However, our analysis also uncovers a critical limitation: this construction lacks distributional anchoring, leading to progressive drift that undermines training stability. To address this, we propose Anchored Supervised Fine-Tuning (ASFT), which augments DFT's reweighting with lightweight KL regularization to preserve tightness while ensuring stability. Empirically, ASFT consistently outperforms both SFT and DFT across mathematical reasoning, medical knowledge grounding, and code generation, achieving substantial improvements with minimal computational overhead. Our RWR framework provides a systematic lens for understanding post-training methods and demonstrates that principled theoretical analysis leads to both stronger guarantees and practical gains.
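To make the two ingredients of the abstract concrete, the following is a minimal, illustrative sketch of an ASFT-style objective on one demonstration sequence: DFT-style reweighting scales each token's negative log-likelihood by that token's own (stop-gradient) probability, and a KL-style penalty against a fixed reference model supplies the distributional anchor. The function name `asft_loss`, the `beta` value, and the per-token log-ratio used as a KL proxy are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def asft_loss(p_tokens, p_ref_tokens, beta=0.1):
    """Illustrative ASFT-style objective on one sequence (not the paper's exact form).

    p_tokens:     current-policy probabilities of the demonstration tokens.
    p_ref_tokens: probabilities of the same tokens under a frozen reference
                  (anchor) model, e.g. the pre-fine-tuning checkpoint.
    beta:         KL regularization strength (illustrative value).
    """
    p = np.asarray(p_tokens, dtype=float)
    p_ref = np.asarray(p_ref_tokens, dtype=float)

    # DFT-style reweighting: each token's NLL is scaled by its own probability
    # (treated as a stop-gradient weight in an autograd implementation),
    # emphasizing tokens the model already assigns nontrivial mass to.
    weighted_nll = np.mean(-p * np.log(p))

    # Anchoring term: a crude per-token KL proxy via the log-ratio to the
    # reference model; it grows as the policy drifts away from the anchor.
    kl_proxy = np.mean(np.log(p / p_ref))

    return weighted_nll + beta * kl_proxy
```

With `beta=0` this reduces to the DFT-style reweighted loss; a larger log-ratio to the reference (i.e. more drift) increases the penalty, which is the stabilizing role the abstract attributes to the KL anchor.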