Skywork Open Reasoner 1 Technical Report

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient reasoning capability of large language models (LLMs) on long-chain-of-thought (CoT) tasks, this paper proposes a scalable reinforcement learning (RL) framework specifically optimized for extended CoT reasoning. Methodologically, it integrates Proximal Policy Optimization (PPO), multi-stage reward modeling, entropy regularization to mitigate entropy collapse, and CoT distillation augmented with large-scale synthetic reasoning data. Key contributions include: (i) the first demonstration of efficient, large-scale RL training for long-CoT models; (ii) open-sourcing of full-stack resources—including model weights, training code, and datasets; and (iii) the Skywork-OR1 series, which achieves state-of-the-art performance on AIME24/25 and LiveCodeBench, significantly outperforming DeepSeek-R1 and Qwen3-32B—yielding a +15.0% average accuracy gain for the 32B variant and establishing new SOTA for the 7B variant among models of comparable scale.
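The summary above mentions PPO combined with entropy regularization to mitigate entropy collapse. The report's exact loss is not reproduced here; the following is a minimal numpy sketch of the general idea, with all function names, shapes, and the `entropy_coef` value chosen for illustration only:

```python
import numpy as np

def ppo_loss_with_entropy(logp_new, logp_old, advantages, probs,
                          clip_eps=0.2, entropy_coef=0.01):
    """PPO clipped surrogate loss with an entropy bonus (to be minimized).

    logp_new/logp_old: per-token log-probs of the sampled tokens under
    the current and behavior policies.
    probs: full next-token distributions, used for the entropy term.
    """
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # PPO takes the pessimistic (minimum) of the two surrogates.
    policy_loss = -np.mean(np.minimum(ratio * advantages,
                                      clipped * advantages))
    # Mean token-level entropy; the bonus discourages the policy
    # distribution from collapsing onto a few tokens.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1).mean()
    return policy_loss - entropy_coef * entropy
```

Subtracting the scaled entropy means that higher-entropy (more exploratory) policies receive a lower loss, which is one common way to delay the premature entropy collapse the paper analyzes.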

📝 Abstract
The success of DeepSeek-R1 underscores the significant role of reinforcement learning (RL) in enhancing the reasoning capabilities of large language models (LLMs). In this work, we present Skywork-OR1, an effective and scalable RL implementation for long Chain-of-Thought (CoT) models. Building on the DeepSeek-R1-Distill model series, our RL approach achieves notable performance gains, increasing average accuracy across AIME24, AIME25, and LiveCodeBench from 57.8% to 72.8% (+15.0%) for the 32B model and from 43.6% to 57.5% (+13.9%) for the 7B model. Our Skywork-OR1-32B model surpasses both DeepSeek-R1 and Qwen3-32B on the AIME24 and AIME25 benchmarks, while achieving comparable results on LiveCodeBench. The Skywork-OR1-7B and Skywork-OR1-Math-7B models demonstrate competitive reasoning capabilities among models of similar size. We perform comprehensive ablation studies on the core components of our training pipeline to validate their effectiveness. Additionally, we thoroughly investigate the phenomenon of entropy collapse, identify key factors affecting entropy dynamics, and demonstrate that mitigating premature entropy collapse is critical for improved test performance. To support community research, we fully open-source our model weights, training code, and training datasets.
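The abstract emphasizes tracking entropy dynamics to detect premature collapse. One simple diagnostic (a sketch, not the paper's actual instrumentation) is the mean entropy of the model's next-token distributions over generated tokens; a value that shrinks rapidly during RL training signals collapse:

```python
import numpy as np

def mean_token_entropy(logits):
    """Average entropy (in nats) of next-token distributions.

    logits: array of shape (num_tokens, vocab_size).
    """
    # Numerically stable softmax over the vocabulary dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=-1, keepdims=True)
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=-1).mean())
```

A uniform distribution over a vocabulary of size V yields the maximum value ln(V); a sharply peaked distribution yields a value near zero.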
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning in LLMs via RL for long CoT
Improving model accuracy on benchmarks like AIME24/25
Addressing entropy collapse to boost test performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning enhances Chain-of-Thought models
Mitigates entropy collapse for better performance
Open-sources model weights and training datasets
Authors
Jujie He — Skywork AI, Kunlun Inc
Jiacai Liu — Fudan University
Chris Liu — Skywork AI, Kunlun Inc
Rui Yan — Skywork AI, Kunlun Inc
Chaojie Wang — Skywork AI, Kunlun Inc
Peng Cheng — Skywork AI, Kunlun Inc
Xiaoyu Zhang — Skywork AI, Kunlun Inc
Fuxiang Zhang — Nanyang Technological University
Jiacheng Xu — Nanyang Technological University
Wei Shen — Skywork AI, Kunlun Inc
Siyuan Li — Skywork AI, Kunlun Inc
Liang Zeng — Skywork AI, Kunlun Inc
Tianwen Wei — Unknown affiliation
Cheng Cheng — Skywork AI, Kunlun Inc
Bo An — Skywork AI, Kunlun Inc
Yang Liu — Skywork AI, Kunlun Inc
Yahui Zhou — Skywork AI, Kunlun Inc