Reassessing the Role of Supervised Fine-Tuning: An Empirical Study in VLM Reasoning

📅 2025-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work re-evaluates the role of supervised fine-tuning (SFT) in post-training vision-language models (VLMs) for reasoning, challenging the prevailing "RL-outperforms-SFT" paradigm. Through rigorously controlled experiments using a unified data source—complemented by multi-scale evaluation, cross-modal generalization tests, and reward-accuracy correlation analysis—the authors systematically assess SFT's efficacy across model scales, data volumes, and distribution shifts. Key findings are: (1) SFT exhibits superior robustness for weaker models, significantly improving reasoning reliability in small-scale VLMs; (2) it achieves performance comparable to RL trained on 20K samples using only 2K annotated examples, demonstrating high data efficiency; (3) it enables stronger cross-modal transfer; and (4) the analysis uncovers a pervasive "reward hacking" phenomenon in RL-based training—a previously undocumented issue in VLM reasoning. Collectively, these results advocate a collaborative SFT–RL paradigm for post-training.

📝 Abstract
Recent advances in vision-language model (VLM) reasoning have been largely attributed to the rise of reinforcement learning (RL), which has shifted the community's focus away from the supervised fine-tuning (SFT) paradigm. Many studies suggest that introducing an SFT stage not only fails to improve reasoning ability but may also negatively impact model training. In this study, we revisit this RL-centric belief through a systematic and controlled comparison of SFT and RL on VLM reasoning. Using identical data sources, we find that the relative effectiveness of SFT and RL is conditional and strongly influenced by model capacity, data scale, and data distribution. Contrary to common assumptions, our findings show that SFT plays a crucial role across several scenarios: (1) Effectiveness for weaker models. SFT more reliably elicits reasoning capabilities in smaller or weaker VLMs. (2) Data efficiency. SFT with only 2K samples achieves reasoning performance comparable to or better than RL with 20K samples. (3) Cross-modal transferability. SFT demonstrates stronger generalization across modalities. Moreover, we identify a pervasive issue of deceptive rewards, where higher rewards fail to correlate with better reasoning accuracy in RL. These results challenge the prevailing "RL over SFT" narrative. They highlight that the role of SFT may have been underestimated and support a more balanced post-training pipeline in which SFT and RL function as complementary components.
Problem

Research questions and friction points this paper is trying to address.

Reassessing SFT's role in VLM reasoning
Comparing SFT and RL effectiveness conditions
Challenging the RL-centric belief by demonstrating SFT's benefits
Innovation

Methods, ideas, or system contributions that make the work stand out.

SFT enhances reasoning in weaker VLMs effectively
SFT achieves data efficiency with minimal training examples
SFT shows strong cross-modal generalization capabilities
Yongcan Yu
Master Student, CASIA
Trustworthy AI, Safety in AI
Lingxiao He
Meituan
Shuo Lu
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Lijun Sheng
University of Science and Technology of China
computer vision, model adaptation
Yinuo Xu
School of Artificial Intelligence, University of Chinese Academy of Sciences
Yanbo Wang
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Kuangpu Guo
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Jianjie Cheng
Meituan
Meng Wang
Meituan
Qianlong Xie
Meituan
Xingxing Wang
Meituan
Dapeng Hu
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Jian Liang
Kuaishou Inc.
transfer learning, graph learning