🤖 AI Summary
This study investigates the trade-off between prior-knowledge retention and task adaptation in multimodal large language models (MLLMs), comparing supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT). To evaluate stability and forgetting systematically, we propose a novel downstream jigsaw-puzzle task. Experiments reveal that although RFT converges more slowly, it significantly mitigates catastrophic forgetting; crucially, its rollout sampling trajectories implicitly encode knowledge-retention signals. Leveraging this insight, we design a trajectory-guided SFT strategy that combines rapid adaptation with knowledge preservation on Qwen2.5-VL. A key finding is that data distribution shift, not parameter-update magnitude, is the primary driver of forgetting, and that RFT inherently alleviates this shift. Our work establishes an interpretable, transferable fine-tuning paradigm for MLLM continual learning, bridging theoretical insight with practical efficacy.
📝 Abstract
Post-training algorithms such as Supervised Fine-Tuning (SFT) and Reinforcement Fine-Tuning (RFT) are widely used to adapt multimodal large language models to downstream tasks. While effective at task adaptation, their impact on prior knowledge remains unclear. In this paper, we introduce jigsaw puzzles as a novel task absent from existing pretraining corpora and systematically study the behavior of SFT and RFT on an open-source multimodal model, Qwen2.5-VL. Our experiments reveal a sharp trade-off: SFT enables rapid task acquisition but leads to catastrophic forgetting, whereas RFT learns more slowly on novel tasks but maintains prior knowledge. We analyze this phenomenon through the lens of learning dynamics, showing that RFT reinforces correct samples that are naturally aligned with the base model's probability landscape, mitigating interference with prior knowledge. Moreover, supervised training on correct RFT-sampled rollouts allows SFT to preserve knowledge while rapidly learning new tasks. These findings suggest that data distribution, rather than algorithmic differences, plays a central role in forgetting, and highlight RFT's potential for stable continual learning in multimodal large language models.
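The abstract's key recipe, fine-tuning on the model's own verified-correct rollouts rather than on external labels, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the toy policy, and the verifier are all stand-ins for an actual MLLM (e.g. Qwen2.5-VL) and a task-specific checker such as a jigsaw-puzzle solution validator.

```python
import itertools

def sample_rollouts(policy, prompt, k):
    """Draw k candidate responses from the current policy for one prompt."""
    return [policy(prompt) for _ in range(k)]

def build_sft_dataset(policy, prompts, is_correct, k=8):
    """Keep only rollouts the verifier marks correct.

    Because these samples are drawn on-policy, they stay close to the base
    model's own probability landscape -- the property the paper credits
    for reduced interference with prior knowledge during SFT.
    """
    dataset = []
    for prompt in prompts:
        for response in sample_rollouts(policy, prompt, k):
            if is_correct(prompt, response):
                dataset.append({"prompt": prompt, "response": response})
    return dataset

# Toy stand-in for a model: answers arithmetic prompts, alternating
# between the correct answer and an off-by-one error.
noise = itertools.cycle([0, 1])
toy_policy = lambda p: str(eval(p) + next(noise))

prompts = ["12+30", "7+5"]
data = build_sft_dataset(
    toy_policy, prompts, lambda p, r: str(eval(p)) == r, k=4
)
# Only the correct half of the rollouts survives filtering; `data` would
# then be handed to an ordinary SFT loop.
```

The design point is that the algorithm change (SFT vs. RFT) is held fixed and only the data source changes, which is what lets the paper attribute forgetting to data-distribution shift rather than to the optimizer.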