Mitigating Forgetting Between Supervised and Reinforcement Learning Yields Stronger Reasoners

📅 2025-10-05
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses three key challenges in jointly applying supervised fine-tuning (SFT) and reinforcement learning (RL) to large language models: low data efficiency, strong algorithmic coupling, and catastrophic forgetting. To this end, we propose a plug-and-play dynamic fusion framework. It improves SFT data efficiency through a high-challenge sample selection mechanism, and it mitigates SFT-induced forgetting of RL-acquired reasoning skills through a high-entropy token-weighted loss and critical-parameter freezing. The framework is agnostic to the specific RL algorithm and supports general knowledge integration. Experiments show that the method achieves state-of-the-art reasoning performance using only 1.5% of the supervised data and 20.4% of the RL data required by the prior best approach, improving both the training efficiency and the stability of joint SFT–RL optimization.
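The high-entropy token-weighted loss is only named above, so a concrete reading may help. The sketch below is a minimal PyTorch interpretation, assuming the weighting reduces to keeping the top fraction of supervised tokens ranked by predictive entropy; `keep_ratio` and the hard top-k cutoff are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def entropy_weighted_sft_loss(logits, targets, keep_ratio=0.2, ignore_index=-100):
    # logits: (batch, seq, vocab); targets: (batch, seq) padded with ignore_index.
    log_probs = F.log_softmax(logits, dim=-1)
    # Per-token predictive entropy; detached because it only gates the loss.
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).detach()

    token_loss = F.cross_entropy(
        logits.flatten(0, 1), targets.flatten(),
        reduction="none", ignore_index=ignore_index,
    ).view_as(targets)

    valid = targets.ne(ignore_index)
    # Keep only the top `keep_ratio` of valid tokens by entropy (hypothetical knob).
    k = max(1, int(keep_ratio * int(valid.sum())))
    threshold = entropy[valid].topk(k).values.min()
    mask = valid & (entropy >= threshold)

    # Mean cross-entropy over the selected high-entropy tokens only.
    return (token_loss * mask).sum() / mask.sum().clamp(min=1)
```

Intuitively, high-entropy positions are where the model is still undecided, so restricting the SFT signal to them is one way to inject new knowledge while leaving confident, RL-shaped behavior largely untouched.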

📝 Abstract
Large Language Models (LLMs) show strong reasoning abilities, often amplified by Chain-of-Thought (CoT) prompting and reinforcement learning (RL). Although RL algorithms can substantially improve reasoning, they struggle to expand reasoning boundaries because they learn from their own reasoning trajectories rather than acquiring external knowledge. Supervised fine-tuning (SFT) offers complementary benefits but typically requires large-scale data and risks overfitting. Recent attempts to combine SFT and RL face three main challenges: data inefficiency, algorithm-specific designs, and catastrophic forgetting. We propose a plug-and-play framework that dynamically integrates SFT into RL by selecting challenging examples for SFT. This approach reduces SFT data requirements and remains agnostic to the choice of RL or SFT algorithm. To mitigate catastrophic forgetting of RL-acquired skills during SFT, we select high-entropy tokens for loss calculation and freeze parameters identified as critical for RL. Our method achieves state-of-the-art (SoTA) reasoning performance using only 1.5% of the SFT data and 20.4% of the RL data used by prior SoTA, providing an efficient and plug-and-play solution for combining SFT and RL in reasoning post-training.
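The abstract says parameters "identified as critical for RL" are frozen during SFT, but not how criticality is measured. Below is a minimal sketch under one assumed proxy: per-element drift between a pre-RL snapshot and the current weights. Both the proxy and `freeze_frac` are illustrative, and the gradient-hook mechanism is just one way to freeze individual weight entries rather than whole tensors.

```python
import torch

def freeze_rl_critical_params(model, pre_rl_state, freeze_frac=0.05):
    # pre_rl_state: {name: tensor} snapshot taken before the RL phase, e.g.
    # {n: p.detach().clone() for n, p in model.named_parameters()}.
    for name, p in model.named_parameters():
        drift = (p.detach() - pre_rl_state[name].to(p.device)).abs()
        k = max(1, int(freeze_frac * drift.numel()))
        threshold = drift.flatten().topk(k).values.min()
        frozen = drift >= threshold  # entries that moved most under RL

        # Zero the gradient on "critical" entries so SFT cannot overwrite them.
        p.register_hook(lambda grad, m=frozen: grad.masked_fill(m, 0.0))
```

One caveat on this design: optimizers with momentum or weight decay can still nudge masked entries, so a fuller version might also restore the frozen values after each optimizer step.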
Problem

Research questions and friction points this paper is trying to address.

Mitigating catastrophic forgetting when combining supervised fine-tuning and reinforcement learning
Reducing the amount of supervised fine-tuning data needed for reasoning tasks
Providing a plug-and-play framework for efficiently combining SFT and RL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamically integrates SFT into RL training
Selects challenging examples for SFT (see the sketch after this list)
Mitigates forgetting via a high-entropy token loss and critical-parameter freezing
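As referenced above, here is a minimal sketch of challenging-example selection, assuming "challenging" means a low empirical pass rate under the current policy. The `policy_rollouts(prompt, n)` and `grader(prompt, answer)` callables are hypothetical stand-ins for the training loop's sampler and reward verifier, and the thresholds are illustrative.

```python
def select_challenging(prompts, policy_rollouts, grader,
                       n_samples=8, max_pass_rate=0.25):
    """Keep prompts the current policy usually fails, as SFT candidates."""
    selected = []
    for prompt in prompts:
        answers = policy_rollouts(prompt, n_samples)
        pass_rate = sum(grader(prompt, a) for a in answers) / n_samples
        if pass_rate <= max_pass_rate:  # rarely solved -> worth supervising
            selected.append(prompt)
    return selected
```

Under this reading, SFT effort is spent only where RL's own rollouts fail, which is consistent with the paper's reported drop in supervised data requirements.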
👥 Authors
Xiangchi Yuan, Georgia Institute of Technology
Xiang Chen, Adobe Research
Tong Yu, Adobe Research
Dachuan Shi, Georgia Institute of Technology
Can Jin, Rutgers University
Wenke Lee, Georgia Institute of Technology
Saayan Mitra, Adobe (Principal Research Scientist)