AI Summary
This study systematically investigates the differential impacts of supervised fine-tuning (SFT) and reinforcement learning (RL) on *in-distribution task acquisition* (e.g., word games, image captioning) and *out-of-distribution (OOD) generalization* during post-training of foundation models. We introduce two novel benchmarks: GeneralPoints, a self-constructed arithmetic reasoning card game, and V-IRL, a realistic visual navigation environment. Using PPO-based RL algorithms and multimodal evaluation protocols, we conduct controlled comparisons against SFT baselines. Our key findings are: (1) outcome-based RL significantly improves OOD generalization and concurrently enhances low-level visual perception capabilities; (2) SFT exhibits strong format consistency but suffers from overfitting and poor generalization, serving as a necessary prerequisite for stable RL training; and (3) RL and SFT exhibit complementary trade-offs between memorization and learning: RL acquires transferable, compositional knowledge, whereas SFT grounds model behavior in structured output scaffolding. This work establishes the first empirical evidence that RL can induce generalizable, cross-domain cognitive capabilities beyond task-specific memorization.
Abstract
Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used post-training techniques for foundation models. However, their roles in enhancing model generalization capabilities remain unclear. This paper studies the difference between SFT and RL on generalization and memorization, focusing on text-based rule variants and visual variants. We introduce GeneralPoints, an arithmetic reasoning card game, and adopt V-IRL, a real-world navigation environment, to assess how models trained with SFT and RL generalize to unseen variants in both textual and visual domains. We show that RL, especially when trained with an outcome-based reward, generalizes across both rule-based textual and visual variants. SFT, in contrast, tends to memorize training data and struggles to generalize to out-of-distribution scenarios. Further analysis reveals that RL improves the model's underlying visual recognition capabilities, contributing to its enhanced generalization in the visual domain. Despite RL's superior generalization, we show that SFT remains essential for effective RL training: SFT stabilizes the model's output format, enabling subsequent RL to achieve its performance gains. These findings demonstrate the capability of RL for acquiring generalizable knowledge in complex, multi-modal tasks.
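To make the notion of an outcome-based reward concrete, the following is a minimal sketch for a GeneralPoints-style arithmetic card game: the model emits an arithmetic expression over its card values, and the reward depends only on whether the final result hits the target, not on any intermediate reasoning. The function name, target value, and validity rules here are illustrative assumptions, not the paper's exact implementation.

```python
import ast

def outcome_reward(card_values, target, expression):
    """Outcome-based reward sketch (hypothetical helper, not the
    paper's code): return 1.0 only if `expression` uses exactly
    the given card values and evaluates to `target`, else 0.0."""
    try:
        tree = ast.parse(expression, mode="eval")
    except SyntaxError:
        return 0.0  # unparseable output earns no reward

    # The expression must use each card value exactly once.
    used = sorted(
        node.value for node in ast.walk(tree)
        if isinstance(node, ast.Constant)
        and isinstance(node.value, (int, float))
    )
    if used != sorted(card_values):
        return 0.0

    # Restrict the parse tree to plain arithmetic for safe evaluation.
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.UAdd)
    if not all(isinstance(node, allowed) for node in ast.walk(tree)):
        return 0.0

    try:
        value = eval(compile(tree, "<expr>", "eval"))
    except ZeroDivisionError:
        return 0.0

    # Binary outcome: only the final result matters.
    return 1.0 if abs(value - target) < 1e-6 else 0.0
```

Because the reward scores only the outcome, the policy is free to discover any valid derivation, which is the property the abstract credits for RL's transfer to unseen rule variants.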