Reinforcement Fine-Tuning Enables MLLMs Learning Novel Tasks Stably

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the trade-off between prior knowledge retention and task adaptation in multimodal large language models (MLLMs), comparing supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT). To systematically evaluate stability and forgetting, we propose a novel downstream “puzzle task.” Experiments reveal that although RFT converges more slowly, it significantly mitigates catastrophic forgetting; crucially, its rollout sampling trajectories implicitly encode knowledge retention signals. Leveraging this insight, we design a trajectory-guided SFT strategy, achieving synergistic optimization of rapid adaptation and knowledge preservation on Qwen2.5-VL. A key finding is that data distribution shift—not parameter update magnitude—is the primary driver of forgetting, and RFT inherently alleviates this shift. Our work establishes an interpretable, transferable fine-tuning paradigm for MLLM continual learning, bridging theoretical insight with practical efficacy.

📝 Abstract
Post-training algorithms such as Supervised Fine-Tuning (SFT) and Reinforcement Fine-Tuning (RFT) are widely used to adapt multimodal large language models to downstream tasks. While effective at task adaptation, their impact on prior knowledge remains unclear. In this paper, we introduce jigsaw puzzles as a novel task absent from existing pretraining corpora and systematically study the behavior of SFT and RFT on an open-source multimodal model, Qwen2.5-VL. Our experiments reveal a sharp trade-off: SFT enables rapid task acquisition but leads to catastrophic forgetting, whereas RFT learns more slowly on novel tasks but maintains prior knowledge. We analyze this phenomenon through the lens of learning dynamics, showing that RFT reinforces correct samples that are naturally aligned with the base model's probability landscape, mitigating interference with prior knowledge. Moreover, supervised training on correct RFT-simulated rollouts allows SFT to preserve knowledge while rapidly learning new tasks. These findings suggest that data distribution, rather than algorithmic differences, plays a central role in forgetting, and highlight RFT's potential for stable continual learning in multimodal large language models.
Problem

Research questions and friction points this paper is trying to address.

Study trade-off between task learning and knowledge retention in MLLMs
Compare SFT and RFT impacts on novel task adaptation
Analyze data distribution role in catastrophic forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Fine-Tuning stabilizes novel task learning
RFT mitigates forgetting by aligning with base model
Supervised training on RFT rollouts preserves prior knowledge
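The last bullet describes a rejection-sampling-style pipeline: draw rollouts from the base policy, keep only the ones judged correct, then use the survivors as SFT data so the training distribution stays close to the base model. A minimal sketch of that filtering loop is below; `sample_rollouts`, `build_sft_dataset`, and the toy policy/correctness functions are hypothetical stand-ins for illustration, not the paper's code.

```python
import random

def sample_rollouts(prompt, policy, n=8):
    """Draw n candidate answers from the base policy for one prompt."""
    return [policy(prompt) for _ in range(n)]

def build_sft_dataset(prompts, policy, is_correct, n=8):
    """Keep only (prompt, answer) pairs whose answer passes the
    correctness check; these become the trajectory-guided SFT data."""
    dataset = []
    for prompt in prompts:
        for answer in sample_rollouts(prompt, policy, n):
            if is_correct(prompt, answer):
                dataset.append((prompt, answer))
    return dataset

# Toy demo: the "policy" guesses a digit; an answer is correct
# only when it matches the prompt exactly.
random.seed(0)
prompts = ["3", "7"]
policy = lambda p: str(random.randint(0, 9))
is_correct = lambda p, a: a == p
sft_data = build_sft_dataset(prompts, policy, is_correct, n=20)
```

Because every retained answer was produced by the base policy itself, fine-tuning on `sft_data` pushes probability mass toward outputs the model already assigns high likelihood, which is the distribution-alignment effect the paper credits for reduced forgetting.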
Authors
Zhihao Zhang (Fudan University)
Qiaole Dong (Fudan University)
Qi Zhang (Fudan University, Shanghai Artificial Intelligence Laboratory)
Jun Zhao (Fudan University)
Enyu Zhou (Fudan University)
Zhiheng Xi (Fudan University)
Senjie Jin (Fudan University)
Xiaoran Fan (Fudan University)
Yuhao Zhou (Fudan University)
Yanwei Fu (Fudan University)
Tao Ji (Renmin University of China)
Tao Gui (Fudan University)
Xuanjing Huang (Fudan University, Shanghai Artificial Intelligence Laboratory)