Long Is More Important Than Difficult for Training Reasoning Models

📅 2025-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the data scarcity bottleneck in training reasoning models, which stems from overreliance on high-difficulty problems. We propose a novel paradigm that prioritizes reasoning chain length—not problem difficulty—as the core optimization dimension. To this end, we introduce a controllable synthetic data generation method that produces reasoning traces with precisely specified lengths. Our first empirical finding reveals that reasoning length exerts a significantly stronger influence on model performance than problem difficulty, and we establish a log-linear scaling law between length and performance. Leveraging this insight, we design a length-decoupled training strategy: fine-tuning Qwen2.5-32B-Instruct on merely 1,000 length-controlled samples yields Long1K-32B, achieving 95.6% accuracy on MATH and 71.1% on GPQA—surpassing DeepSeek-R1-Distill-Qwen-32B. All code, datasets, and models are publicly released.

📝 Abstract
Difficult problems, which often result in long reasoning traces, are widely recognized as key factors for enhancing the performance of reasoning models. However, such high-challenge problems are scarce, limiting the size of available datasets. In this paper, we propose a simple method to decouple the reliance on problem difficulty. First, we empirically demonstrate that reasoning length, rather than problem difficulty, primarily influences the performance of trained models. Second, we identify a scaling law for reasoning length, showing that model performance increases in a log-linear fashion as the reasoning data length grows. Finally, we introduce a straightforward technique to generate reasoning data of arbitrary length, and show that the synthesized data is effective for training reasoning models. After fine-tuning the Qwen2.5-32B-Instruct language model on our Long1K dataset, we present Long1K-32B, which, trained on only 1,000 samples, achieves 95.6% accuracy on MATH and 71.1% on GPQA, outperforming DeepSeek-R1-Distill-Qwen-32B. The model, code, and dataset are all open-sourced at https://huggingface.co/ZTss/LONG1.
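The abstract's log-linear scaling law says performance grows linearly in the logarithm of reasoning length, i.e. accuracy ≈ a + b · log(length), so each doubling of length adds a fixed increment b · log(2). A minimal sketch of fitting such a law, using invented (length, accuracy) pairs for illustration only, not numbers from the paper:

```python
import math

# Hypothetical (reasoning length, accuracy) pairs, invented for illustration.
# They are constructed to follow accuracy = a + b * log(length) exactly.
data = [(1_000, 0.62), (2_000, 0.68), (4_000, 0.74), (8_000, 0.80)]

# Ordinary least-squares fit of accuracy against log(length).
xs = [math.log(length) for length, _ in data]
ys = [acc for _, acc in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Under a log-linear law, every doubling of reasoning length adds
# a constant b * log(2) to accuracy.
print(f"fit: accuracy = {a:.3f} + {b:.3f} * log(length)")
print(f"gain per doubling of length: {b * math.log(2):.3f}")
```

With these fabricated points the fitted gain per doubling comes out to 0.06, matching how the points were constructed; on real measurements the fit would of course be approximate.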
Problem

Research questions and friction points this paper is trying to address.

Decouples reliance on problem difficulty for training reasoning models
Identifies reasoning length as key factor for model performance
Generates synthetic long-reasoning data to enhance training effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples reliance on problem difficulty
Scaling law on reasoning length identified
Generates arbitrary-length reasoning data
Si Shen
Hong Kong University of Science and Technology
Fei Huang
Nanjing University of Science and Technology, Nanjing, Jiangsu, 210094, China
Zhixiao Zhao
Nanjing Agricultural University, Nanjing, Jiangsu, 210095, China
Chang Liu
Nanjing Agricultural University, Nanjing, Jiangsu, 210095, China
Tiansheng Zheng
Nanjing Agricultural University, Nanjing, Jiangsu, 210095, China
Danhao Zhu
Criminal Science and Technology, Jiangsu Police Institute, Nanjing, Jiangsu, 210031, China