🤖 AI Summary
This work re-evaluates the role of supervised fine-tuning (SFT) in post-training vision-language models (VLMs) for reasoning, challenging the prevailing “RL-outperforms-SFT” paradigm. Through rigorously controlled experiments using a unified data source—complemented by multi-scale evaluation, cross-modal generalization tests, and reward-accuracy correlation analysis—we systematically assess SFT’s efficacy across model scales, data volumes, and distribution shifts. Key findings are: (1) SFT exhibits superior robustness for weaker models, significantly improving reasoning reliability in small-scale VLMs; (2) it achieves performance comparable to RL with 20K samples using only 2K annotated examples, demonstrating high data efficiency; (3) it enables stronger cross-modal transfer; and (4) the analysis uncovers a pervasive “reward hacking” phenomenon in RL-based training—an issue previously undocumented in VLM reasoning. Collectively, these results advocate a collaborative SFT–RL paradigm for post-training.
📝 Abstract
Recent advances in vision-language model (VLM) reasoning have been largely attributed to the rise of reinforcement learning (RL), which has shifted the community's focus away from the supervised fine-tuning (SFT) paradigm. Many studies suggest that introducing an SFT stage not only fails to improve reasoning ability but may also negatively impact model training. In this study, we revisit this RL-centric belief through a systematic and controlled comparison of SFT and RL on VLM reasoning. Using identical data sources, we find that the relative effectiveness of SFT and RL is conditional and strongly influenced by model capacity, data scale, and data distribution. Contrary to common assumptions, our findings show that SFT plays a crucial role across several scenarios: (1) Effectiveness for weaker models. SFT more reliably elicits reasoning capabilities in smaller or weaker VLMs. (2) Data efficiency. SFT with only 2K samples achieves reasoning performance comparable to or better than RL with 20K samples. (3) Cross-modal transferability. SFT demonstrates stronger generalization across modalities. Moreover, we identify a pervasive issue of deceptive rewards, where higher rewards fail to correlate with better reasoning accuracy in RL. These results challenge the prevailing "RL over SFT" narrative. They highlight that the role of SFT may have been underestimated and support a more balanced post-training pipeline in which SFT and RL function as complementary components.