ViSurf: Visual Supervised-and-Reinforcement Fine-Tuning for Large Vision-and-Language Models

πŸ“… 2025-10-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the suboptimal performance of supervised fine-tuning (SFT) and the limited generalization of Reinforcement Learning with Verifiable Rewards (RLVR) in post-training large vision-language models (LVLMs), this paper proposes ViSurf, a unified post-training framework that integrates SFT and RLVR in a single stage. Its core idea is to inject supervised signals directly into the RL sampling process: the authors derive a joint optimization objective and design a label-guided rollout mechanism, together with three reward control strategies, so that external supervision and internal reinforcement optimize the model synergistically. Evaluated on multiple multimodal understanding and reasoning benchmarks, ViSurf consistently outperforms standalone SFT, standalone RLVR, and sequential two-stage SFT-then-RLVR training, with gains in both generalization and training stability.

πŸ“ Abstract
Typical post-training paradigms for Large Vision-and-Language Models (LVLMs) include Supervised Fine-Tuning (SFT) and Reinforcement Learning with Verifiable Rewards (RLVR). SFT leverages external guidance to inject new knowledge, whereas RLVR utilizes internal reinforcement to enhance reasoning capabilities and overall performance. However, our analysis reveals that SFT often leads to sub-optimal performance, while RLVR struggles with tasks that exceed the model's internal knowledge base. To address these limitations, we propose ViSurf (**Vi**sual **Su**pervised-and-**R**einforcement **F**ine-Tuning), a unified post-training paradigm that integrates the strengths of both SFT and RLVR within a single stage. We analyze the derivation of the SFT and RLVR objectives to establish the ViSurf objective, providing a unified perspective on these two paradigms. The core of ViSurf involves injecting ground-truth labels into the RLVR rollouts, thereby providing simultaneous external supervision and internal reinforcement. Furthermore, we introduce three novel reward control strategies to stabilize and optimize the training process. Extensive experiments across several diverse benchmarks demonstrate the effectiveness of ViSurf, which outperforms individual SFT, individual RLVR, and two-stage SFT→RLVR training. In-depth analysis corroborates these findings, validating the derivation and design principles of ViSurf.
Problem

Research questions and friction points this paper is trying to address.

SFT alone yields sub-optimal performance, while RLVR alone fails on tasks beyond the model's internal knowledge base
How to combine external supervision with internal reinforcement in a single training stage
How to extend model capability to tasks that exceed its internal knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified SFT and RLVR integration in a single training stage
Injecting ground-truth labels into RLVR rollouts
Introducing three novel reward control strategies
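The label-injection idea can be sketched in a few lines: alongside the model's own sampled rollouts, the ground-truth label is appended as one extra rollout, so the group used for the policy update always contains a correct, externally supervised answer. The sketch below is illustrative only; the function names are hypothetical, and the advantage computation assumes a GRPO-style group normalization rather than the paper's exact objective or its three reward control strategies.

```python
def label_guided_rollouts(sample_fn, reward_fn, prompt, label, n_rollouts=8):
    """Sample model rollouts, then inject the ground-truth label as one
    extra rollout so the group always contains a supervised answer.

    sample_fn(prompt) -> str  : draws one rollout from the current policy
    reward_fn(r, label) -> float : verifiable reward for a rollout
    (Both are placeholders; the paper's actual interfaces may differ.)
    """
    rollouts = [sample_fn(prompt) for _ in range(n_rollouts)]
    rollouts.append(label)  # external supervision inside the RL group
    rewards = [reward_fn(r, label) for r in rollouts]
    return rollouts, rewards


def group_advantages(rewards):
    """GRPO-style normalization: advantage = (r - mean) / std.
    The injected label rollout typically earns the top reward, so it
    receives a positive advantage and is reinforced like an SFT target."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]
```

Under this view, the injected label turns the supervised signal into just another (high-reward) member of the rollout group, which is how a single objective can cover both paradigms.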