The Valley of Code Reasoning: Scaling Knowledge Distillation of Large Language Models

📅 2025-10-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how dataset scale affects the code reasoning capability of small models in knowledge distillation, specifically when distilling chain-of-thought reasoning from large language models (LLMs) into compact non-reasoning models. Method: We employ a multi-stage fine-tuning strategy and conduct systematic evaluation on competitive programming benchmarks, varying both dataset size and problem difficulty. Contribution/Results: We first identify and empirically validate the "code reasoning valley": a non-monotonic trend in which small-model accuracy initially degrades and then improves as the volume of distillation data grows. We further demonstrate that output correctness is not a necessary condition for effective distillation, challenging the conventional assumption that transfer depends strongly on high-quality teacher labels. Our analysis characterizes the nonlinear scaling law of code reasoning capability transfer: at low-to-moderate data scales, simple problems are more effective than hard ones for eliciting reasoning ability, thereby deepening the understanding of distillation dynamics in code reasoning tasks.
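The distillation setup described above amounts to supervised fine-tuning on teacher reasoning traces: each teacher rollout is packed into a training record, and no correctness filter is applied (the paper reports that output correctness makes no difference). A minimal sketch in Python; the record fields and the `<think>` delimiter are illustrative assumptions, not the paper's actual data format.

```python
def build_distillation_example(problem: str, trace: str, solution: str) -> dict:
    """Pack one teacher sample into a chat-style SFT record.

    The student is trained to reproduce the teacher's chain of thought
    followed by the final code. Note that `solution` is kept regardless
    of whether it actually passes the problem's tests.
    """
    prompt = f"Solve this competitive programming problem:\n{problem}"
    target = f"<think>\n{trace}\n</think>\n{solution}"
    return {
        "prompt": prompt,       # loss is typically masked on these tokens
        "completion": target,   # loss is computed on the teacher's tokens
    }

# Hypothetical usage: one record per teacher rollout, no correctness filter.
ex = build_distillation_example(
    problem="Given n integers, print their sum.",
    trace="Read the values and accumulate them in a running total.",
    solution="print(sum(map(int, input().split())))",
)
```

In a multi-stage setup, the same record format would be reused at each stage, with only the mixture of easy versus hard problems and the total record count varying.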

πŸ“ Abstract
Distilling the thinking traces of a Large Language Model (LLM) with reasoning capabilities into a smaller model has been proven effective. Yet there is little work on how model performance scales with the quantity of distillation data. In this work, we study the scaling trend of distilling competitive coding skills on two small non-reasoning LLMs. We validate the hypothesis that there is a $\textit{valley of code reasoning}$: downstream performance on competitive coding first drops as data quantity increases, then steadily increases in a sharper-than-log-linear fashion. Having identified the trend, we further fine-tune the models at two different distillation stages on the same data to ground conclusions in their respective learning phases. We find that across stages in the low and medium-low data regimes, small models benefit significantly more from easier coding questions than from harder ones. We also find that, surprisingly, the correctness of outputs in the training data makes no difference to distillation outcomes. Our work is a step forward in understanding the training dynamics of code reasoning distillation beyond intuition.
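The valley hypothesis describes a curve with an interior minimum: accuracy dips at moderate data scale and then recovers with increasingly large per-log-step gains. A minimal sketch of detecting such a valley in an accuracy-versus-data-size curve; the numbers below are synthetic placeholders for illustration, not the paper's results.

```python
def find_valley(data_sizes, accuracies):
    """Locate the 'valley': an interior minimum where accuracy first
    drops with more distillation data, then recovers.

    Returns the data size at the minimum, or None if the minimum sits
    at either endpoint (i.e., the curve is effectively monotonic)."""
    i = min(range(len(accuracies)), key=accuracies.__getitem__)
    if 0 < i < len(accuracies) - 1:
        return data_sizes[i]
    return None

# Synthetic illustration only: sizes grow by a constant factor (equal
# log steps); accuracy dips, then the per-step gains grow (0.09, then
# 0.17), i.e., sharper-than-log-linear recovery.
sizes = [1_000, 4_000, 16_000, 64_000, 256_000]
accs = [0.22, 0.18, 0.15, 0.24, 0.41]
print(find_valley(sizes, accs))  # → 16000
```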
Problem

Research questions and friction points this paper is trying to address.

Investigating how code reasoning distillation scales with data quantity
Identifying performance valley patterns during knowledge transfer process
Analyzing training dynamics of small models on coding problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scaling knowledge distillation with reasoning traces
Identifying valley-shaped performance trend in coding
Fine-tuning models at different distillation stages