🤖 AI Summary
This study investigates the mechanisms that enable large language models (LLMs) to efficiently acquire long chain-of-thought (Long CoT) reasoning capabilities. Addressing the lack of clarity around optimal training data and technical pathways, the authors establish that the *logical structure* of Long CoT demonstrations, not their factual correctness, is the primary signal driving the acquisition of reasoning competence, while content accuracy exerts only marginal influence (an insight termed "structural sensitivity"). Methodologically, they employ supervised fine-tuning (SFT) with low-rank adaptation (LoRA), fine-tuning Qwen2.5-32B-Instruct on merely 17K high-quality Long CoT samples. Experimental results show substantial gains: +40.0 percentage points to 56.7% on AIME 2024 and +8.1 points to 57.0% on LiveCodeBench, matching o1-preview's performance while drastically reducing data and compute requirements.
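The low-rank adaptation mentioned above can be illustrated with a minimal sketch in pure Python (no deep-learning framework; the helper names are hypothetical and the scaling follows the standard LoRA formulation, not necessarily this paper's exact configuration). Instead of updating a full `d_out x d_in` weight matrix, LoRA freezes it and trains two small factors `B` (`d_out x r`) and `A` (`r x d_in`), applying the update `W + (alpha / r) * B @ A`:

```python
def matmul(X, Y):
    """Naive matrix multiply for nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, alpha):
    """Apply the LoRA-adapted weight W + (alpha / r) * B @ A.

    W: frozen base weight, d_out x d_in
    B: trained factor, d_out x r (initialized to zeros in LoRA)
    A: trained factor, r x d_in
    """
    r = len(A)            # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)  # low-rank reconstruction of the full update
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

def trainable_params(d_out, d_in, r):
    """Parameters trained by LoRA vs. full fine-tuning for one matrix."""
    return r * (d_out + d_in), d_out * d_in
```

With `r` much smaller than the hidden dimensions, the trainable count `r * (d_out + d_in)` is a tiny fraction of `d_out * d_in`, which is what makes fine-tuning a 32B-parameter model parameter-efficient.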
📝 Abstract
Large reasoning models (LRMs) tackle complex reasoning problems by following long chains-of-thought (Long CoT) that incorporate reflection, backtracking, and self-validation. However, the training techniques and data requirements needed to elicit Long CoT remain poorly understood. In this work, we find that a large language model (LLM) can effectively learn Long CoT reasoning through data-efficient supervised fine-tuning (SFT) and parameter-efficient low-rank adaptation (LoRA). With just 17k Long CoT training samples, the Qwen2.5-32B-Instruct model achieves significant improvements on a wide range of math and coding benchmarks, including 56.7% (+40.0%) on AIME 2024 and 57.0% (+8.1%) on LiveCodeBench, competitive with the proprietary o1-preview model's scores of 44.6% and 59.1%. More importantly, we find that the structure of Long CoT is critical to the learning process, whereas the content of individual reasoning steps has minimal impact. Perturbations that affect content, such as training on incorrect samples or removing reasoning keywords, have little effect on performance. In contrast, structural modifications that disrupt the logical consistency of the Long CoT, such as shuffling or deleting reasoning steps, significantly degrade accuracy. For example, a model trained on Long CoT samples with incorrect answers achieves only 3.2% lower accuracy than one trained on fully correct samples. These insights deepen our understanding of how to elicit reasoning capabilities in LLMs and highlight key considerations for efficiently training the next generation of reasoning models. This is the academic paper accompanying our previously released Sky-T1-32B-Preview model. Code is available at https://github.com/NovaSky-AI/SkyThought.
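The perturbations described above can be sketched as simple transformations on a list of reasoning steps. This is a minimal illustration only; the function names and the keyword list are hypothetical and not taken from the paper's released code:

```python
import random

def shuffle_steps(steps, seed=0):
    """Structural perturbation: destroy the logical ordering of steps."""
    rng = random.Random(seed)
    out = list(steps)
    rng.shuffle(out)
    return out

def delete_steps(steps, frac=0.3, seed=0):
    """Structural perturbation: drop a fraction of steps, breaking the chain."""
    rng = random.Random(seed)
    n_keep = max(1, round(len(steps) * (1 - frac)))
    keep = sorted(rng.sample(range(len(steps)), k=n_keep))
    return [steps[i] for i in keep]

# Illustrative reflection keywords; the paper's actual keyword set may differ.
KEYWORDS = ("wait", "alternatively", "let me check")

def strip_keywords(step):
    """Content perturbation: remove reflection keywords from a single step."""
    for kw in KEYWORDS:
        step = step.replace(kw, "")
    return step.strip()
```

Under the paper's findings, training on data passed through `strip_keywords` (a content perturbation) barely moves accuracy, while `shuffle_steps` and `delete_steps` (structural perturbations) significantly degrade it.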