🤖 AI Summary
This work investigates the dynamic evolution of out-of-distribution (OOD) generalization during two-stage fine-tuning of large language models (LLMs). We identify a counterintuitive pattern: supervised fine-tuning (SFT) induces an early peak in OOD performance followed by rapid degradation ("OOD forgetting"), while reinforcement learning (RL) does not inherently enhance OOD generalization but instead selectively restores reasoning capabilities eroded during SFT, within a critical time window: both insufficient and excessive SFT impede effective RL recovery. Using singular value decomposition (SVD) of parameter matrices and targeted singular-vector rotation interventions, we demonstrate for the first time that OOD forgetting and restoration fundamentally arise from the misalignment and subsequent correction of key singular directions within a low-rank subspace. These findings challenge the conventional "SFT memorizes, RL generalizes" paradigm and establish a novel, interpretable framework for OOD capability evolution centered on singular-vector rotation, providing theoretical foundations and actionable levers for controllable fine-tuning.
📝 Abstract
The two-stage fine-tuning paradigm of Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has empirically shown better reasoning performance than one-stage SFT for the post-training of Large Language Models (LLMs). However, the evolution and mechanism behind the synergy of SFT and RL are still under-explored and inconclusive. In our study, we find the well-known claim "SFT memorizes, RL generalizes" is over-simplified, and discover that: (1) OOD performance peaks at the early stage of SFT and then declines (OOD forgetting); the best SFT checkpoint cannot be identified from training/test loss; (2) the subsequent RL stage does not generate fundamentally better OOD capability; instead, it plays an OOD restoration role, recovering the reasoning ability lost during SFT; (3) this recovery ability has boundaries, i.e., if SFT trains for too short or too long, RL cannot recover the lost OOD ability; (4) to uncover the underlying mechanisms behind the forgetting and restoration process, we apply SVD analysis to parameter matrices, manually edit them, and observe the impact on model performance. Contrary to the common belief that shifts in model capacity mainly result from changes in singular values, we find that the singular values are actually quite stable throughout fine-tuning; instead, OOD behavior strongly correlates with the rotation of singular vectors. Our findings re-identify the roles of SFT and RL in two-stage fine-tuning and establish the rotation of singular vectors as the key mechanism. Code is available at https://github.com/xiaodanguoguo/RL_Heals_SFT
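The kind of analysis the abstract describes can be illustrated with a small numerical sketch (not the authors' code; `svd_drift` and the toy 64×64 matrix are illustrative assumptions): given two versions of a weight matrix, compute the relative drift of the top-k singular values and the rotation angle of the corresponding singular vectors. The demo builds a matrix whose top singular plane is rotated while its singular values are held fixed, the exact situation the paper reports.

```python
# Hedged sketch: measure singular-value drift vs. singular-vector rotation
# between two versions of a weight matrix, as in the paper's SVD analysis.
import numpy as np

def svd_drift(W_before, W_after, k=8):
    """Return relative top-k singular-value drift and per-direction
    rotation angles (radians) of the left singular vectors."""
    U0, S0, _ = np.linalg.svd(W_before, full_matrices=False)
    U1, S1, _ = np.linalg.svd(W_after, full_matrices=False)
    value_drift = np.abs(S1[:k] - S0[:k]) / S0[:k]
    # Cosine between corresponding singular vectors; abs() makes the
    # comparison invariant to the sign ambiguity of SVD.
    cosines = np.abs(np.sum(U0[:, :k] * U1[:, :k], axis=0))
    angles = np.arccos(np.clip(cosines, 0.0, 1.0))
    return value_drift, angles

# Toy demo: rotate the top-2 left singular plane by theta while keeping
# all singular values unchanged ("rotation without value shift").
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
U, S, Vt = np.linalg.svd(W)
theta = 0.3
R = np.eye(64)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
W_rot = U @ R @ np.diag(S) @ Vt  # same spectrum, rotated singular vectors

drift, angles = svd_drift(W, W_rot, k=2)
print(drift)   # near zero: singular values are stable
print(angles)  # close to theta for the two rotated directions
```

Because `U @ R` is still orthogonal, `W_rot` has exactly the singular values of `W`; only the singular vectors move, so `drift` stays near machine precision while `angles` recovers the injected rotation. The paper's intervention experiments go the other way, editing the vectors of a real checkpoint, but the measurement is the same.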