🤖 AI Summary
Small language models (SLMs) struggle to acquire complex reasoning capabilities when high-quality reasoning trajectories are scarce: expert demonstrations are often too difficult to fit under the standard supervised fine-tuning (SFT) + reinforcement learning (RL) paradigm, leading to vanishing initial success rates. To address this, we propose a unified SFT-RL training framework centered on an expert-anchored branched sampling mechanism: short expert prefixes are adaptively injected into failed trajectories to densify the reward signal and enable curriculum-style progressive learning. The method combines an enhanced GRPO algorithm, expert-guided rollouts, partial expert-prefix injection, and theoretically motivated joint optimization. Experiments show that it consistently outperforms standard GRPO while using fewer than 40% of the real expert trajectories, accelerates training by roughly 3×, and, for the first time, resolves the complete failure mode of conventional SFT + RL for SLMs.
📝 Abstract
Small language models (SLMs) struggle to learn complex reasoning behaviors, especially when high-quality traces are scarce or difficult to learn from. The standard training approach combines a supervised fine-tuning (SFT) stage, often used to distill the capabilities of a larger model, followed by a reinforcement learning (RL) stage such as Group Relative Policy Optimization (GRPO). In this paper, we investigate the fundamental limitations of this SFT + RL paradigm and propose methods to overcome them. Under a suitable theoretical model, we demonstrate that the SFT + RL strategy can fail completely when (1) the expert's traces are too difficult for the small model to express, or (2) the small model's initialization has an exponentially small likelihood of success. To address these failure modes, we introduce BREAD: a GRPO variant that unifies the SFT and RL stages via partial expert guidance and branched rollouts. When self-generated traces fail, BREAD adaptively inserts short expert prefixes (hints), allowing the small model to complete the rest of the reasoning path and ensuring that each update includes at least one successful trace. This mechanism both densifies the reward signal and induces a natural learning curriculum. BREAD requires fewer than 40% of the ground-truth traces and consistently outperforms standard GRPO while speeding up training by about 3×. Importantly, we demonstrate that BREAD helps the model solve problems that are otherwise unsolvable by the SFT + RL strategy, highlighting how branched rollouts and expert guidance can substantially boost SLM reasoning.