🤖 AI Summary
This study investigates how the source of chain-of-thought (CoT) data used in supervised fine-tuning affects model generalization, uncovering a paradox in which lower training loss coincides with worse generalization. By comparing verified CoT trajectories generated by DeepSeek-R1-0528 and gpt-oss-120b on an identical problem set, the work identifies two distinct reasoning patterns—convergent deductive reasoning versus divergent, branch-heavy exploration—as the key driver of the generalization gap. Building on this insight, the authors propose a trajectory-filtering strategy based on branch frequency, which yields consistent gains across five reasoning benchmarks, including AIME25 and BeyondAIME, improving average accuracy by 3.6% (and by up to 5.5% on a single benchmark).
📝 Abstract
Supervised Fine-Tuning (SFT) on long Chain-of-Thought (CoT) trajectories has become a pivotal phase in building large reasoning models. However, how CoT trajectories from different sources influence the generalization performance of models remains an open question. In this paper, we conduct a comparative study using two sources of verified CoT trajectories generated by two competing models, \texttt{DeepSeek-R1-0528} and \texttt{gpt-oss-120b}, with their problem sets controlled to be identical. Despite the teachers' comparable performance, we uncover a striking paradox: lower training loss does not translate to better generalization. SFT on \texttt{DeepSeek-R1-0528} data achieves remarkably lower training loss, yet exhibits significantly worse generalization on reasoning benchmarks than SFT on \texttt{gpt-oss-120b} data. To understand this paradox, we perform a multi-faceted analysis probing token-level SFT loss and step-level reasoning behaviors. Our analysis reveals a difference in reasoning patterns: \texttt{gpt-oss-120b} produces highly convergent, deductive trajectories, whereas \texttt{DeepSeek-R1-0528} favors a divergent, branch-heavy exploration pattern. Consequently, models trained on \texttt{DeepSeek-R1-0528} data inherit inefficient exploration behaviors, often getting trapped in redundant exploratory branches that prevent them from reaching correct solutions. Building on this insight, we propose a simple yet effective remedy: filtering out frequently branching trajectories to improve the generalization of SFT. Experiments show that training on the selected \texttt{DeepSeek-R1-0528} subsets improves reasoning performance by up to 5.1% on AIME25, 5.5% on BeyondAIME, and 3.6% on average across five benchmarks.
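The branch-frequency filtering remedy described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the marker list, the per-1k-character branch rate, and the threshold are all assumptions made for the example; the paper's actual step-level branch detection may differ.

```python
# Hedged sketch: score each CoT trajectory by how often it opens a new
# exploratory branch, then keep only trajectories below a threshold.
import re

# Hypothetical branch markers (assumed for illustration); the paper's
# step-level detection of branching behavior may use a different method.
BRANCH_MARKERS = [
    r"\bwait\b", r"\balternatively\b", r"\bon second thought\b", r"\bhmm\b",
]
BRANCH_RE = re.compile("|".join(BRANCH_MARKERS), re.IGNORECASE)

def branch_rate(trajectory: str) -> float:
    """Branch-marker occurrences per 1,000 characters of CoT text."""
    n_markers = len(BRANCH_RE.findall(trajectory))
    return 1000.0 * n_markers / max(len(trajectory), 1)

def filter_trajectories(trajectories, max_rate=2.0):
    """Drop frequently branching (divergent) trajectories before SFT."""
    return [t for t in trajectories if branch_rate(t) <= max_rate]

# A convergent, deductive trace vs. a divergent, branch-heavy one.
convergent = "First, note x = 2. Then x^2 = 4, so the answer is 4."
divergent = ("Maybe x = 3? Wait, that fails. Alternatively try x = 2. "
             "Hmm, wait, let me check again. Wait, yes, x = 2 works.")
kept = filter_trajectories([convergent, divergent])  # keeps only `convergent`
```

In practice one would tune the threshold on held-out data; the point of the sketch is only the shape of the pipeline (score each trajectory, then subset the SFT data).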