🤖 AI Summary
This work investigates catastrophic forgetting in continual post-training (CPT), specifically comparing supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT). We find that RFT inherently preserves knowledge, maintaining or even enhancing general capabilities (e.g., MMMU, MMLU-Pro) across multi-task continual learning. We attribute this advantage to implicit KL regularization emerging during policy optimization and propose a rollout-based instance filtering algorithm to improve RFT’s training stability and efficiency. To enable systematic evaluation, we introduce the first CPT benchmark tailored for multimodal tasks, integrating chain-of-thought reasoning and KL divergence analysis. Experiments on a seven-stage sequential task setup demonstrate that our RFT method matches the performance of full multi-task learning—without requiring memory replay or parameter isolation mechanisms.
📝 Abstract
Continual post-training (CPT) is a popular and effective technique for adapting foundation models like multimodal large language models to specific and ever-evolving downstream tasks. While existing research has primarily concentrated on methods like data replay, model expansion, or parameter regularization, the fundamental role of the learning paradigm within CPT remains largely unexplored. This paper presents a comparative analysis of two core post-training paradigms: supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT), investigating their respective impacts on knowledge retention during CPT. Our experiments are conducted on a benchmark comprising seven diverse multimodal tasks, utilizing Qwen2.5-VL-7B-Instruct as the base model for continual post-training. The investigation yields two significant findings: (1) When continually learning on downstream tasks, SFT leads to catastrophic forgetting of previously learned tasks, whereas RFT inherently preserves prior knowledge and achieves performance comparable to multi-task training. (2) RFT protects and even enhances the model's general knowledge on standard benchmarks (e.g., MMMU and MMLU-Pro), whereas SFT severely degrades the model's general capabilities. Further analysis shows that explicit mechanisms, such as the KL penalty and chain-of-thought reasoning, are not the primary factors. Instead, we find that the implicit regularization inherent to RFT is a key factor in mitigating forgetting. Finally, we propose a rollout-based instance filtering algorithm to improve the stability and efficiency of RFT. Our comprehensive study demonstrates the superiority of RFT as a robust paradigm for continual post-training.
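The abstract does not spell out the rollout-based instance filtering algorithm, but the general idea can be sketched: sample several rollouts per training instance and keep only instances whose success rate is neither 0% nor 100%, since those extremes yield little or no learning signal for policy optimization. This is a minimal illustrative sketch, not the paper's actual implementation; the function names, thresholds, and rollout count are our assumptions.

```python
def filter_instances(instances, rollout_fn, n_rollouts=8, low=0.0, high=1.0):
    """Rollout-based instance filtering (illustrative sketch).

    For each instance, run `rollout_fn` (which returns True on a
    successful/correct rollout) `n_rollouts` times and keep only
    instances whose empirical success rate lies strictly between
    `low` and `high` -- i.e. neither trivially solved nor hopeless,
    so each kept instance carries a useful RFT training signal.
    """
    kept = []
    for inst in instances:
        successes = sum(bool(rollout_fn(inst)) for _ in range(n_rollouts))
        rate = successes / n_rollouts
        if low < rate < high:
            kept.append(inst)
    return kept
```

In practice such a filter would wrap the model's sampling-and-reward loop; here `rollout_fn` stands in for one sampled generation plus its correctness check.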