Towards Large Reasoning Models: A Survey on Scaling LLM Reasoning Capabilities

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often struggle with complex problem solving that requires rigorous analysis, self-reflection, and accurate multi-step reasoning. Method: This work introduces the "Large Reasoning Model" paradigm and proposes a reasoning framework that scales along two axes: during training, it employs reinforcement learning and automated synthetic data generation to construct chain-of-thought (CoT) reasoning trajectories; during inference, it combines autoregressive long-thinking decoding, tree search, and reflection-based modeling to dynamically extend reasoning depth. Contribution/Results: It is the first systematic effort to delineate the core technical pathways for scaling LLM reasoning capabilities, unifying CoT prompting, train-time reasoning learning, and test-time reasoning expansion. Surveyed experiments demonstrate substantial improvements in reasoning accuracy across diverse benchmarks. The framework provides both theoretical foundations and practical design principles for next-generation reasoning-optimized architectures, including OpenAI's o1 series.
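
To make the test-time scaling idea concrete, here is a minimal, hypothetical sketch of one such technique: best-of-N sampling with self-consistency voting, where several chain-of-thought completions are sampled and their final answers are majority-voted. The `sample_cot_answer` stub is an assumption standing in for a real LLM call; it is not code from the paper.

```python
import random
from collections import Counter

def sample_cot_answer(question: str) -> str:
    """Hypothetical stand-in for one sampled chain-of-thought completion.

    A real implementation would prompt an LLM with a CoT template at a
    non-zero temperature and parse the final answer out of the trace.
    """
    # Toy behavior: the correct answer is sampled more often than errors.
    return random.choices(["42", "41", "24"], weights=[0.6, 0.2, 0.2])[0]

def self_consistency(question: str, n_samples: int = 16):
    """Best-of-N test-time scaling: sample N reasoning traces and
    majority-vote over their final answers."""
    answers = [sample_cot_answer(question) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples

if __name__ == "__main__":
    ans, agreement = self_consistency("What is 6 * 7?")
    print(f"voted answer: {ans} (agreement {agreement:.0%})")
```

Increasing `n_samples` trades extra inference compute for accuracy, which is the test-time scaling knob the summary refers to.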

📝 Abstract
Language has long been conceived as an essential tool for human reasoning. The breakthrough of Large Language Models (LLMs) has sparked significant research interest in leveraging these models to tackle complex reasoning tasks. Researchers have moved beyond simple autoregressive token generation by introducing the concept of "thought" -- a sequence of tokens representing intermediate steps in the reasoning process. This innovative paradigm enables LLMs to mimic complex human reasoning processes, such as tree search and reflective thinking. Recently, an emerging trend of learning to reason has applied reinforcement learning (RL) to train LLMs to master reasoning processes. This approach enables the automatic generation of high-quality reasoning trajectories through trial-and-error search algorithms, significantly expanding LLMs' reasoning capacity by providing substantially more training data. Furthermore, recent studies demonstrate that encouraging LLMs to "think" with more tokens during test-time inference can further boost reasoning accuracy significantly. Therefore, train-time and test-time scaling combine to chart a new research frontier -- a path toward Large Reasoning Models. The introduction of OpenAI's o1 series marks a significant milestone in this research direction. In this survey, we present a comprehensive review of recent progress in LLM reasoning. We begin by introducing the foundational background of LLMs and then explore the key technical components driving the development of large reasoning models, with a focus on automated data construction, learning-to-reason techniques, and test-time scaling. We also analyze popular open-source projects aimed at building large reasoning models, and conclude with open challenges and future research directions.
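
To illustrate the learning-to-reason direction described in the abstract, the sketch below runs REINFORCE-style policy-gradient updates on a toy tabular policy that composes discrete "reasoning steps" (arithmetic operations) and receives a reward only when a trajectory ends at the correct answer, i.e., useful trajectories are discovered by trial and error rather than from supervised labels. The toy task, operations, and hyperparameters are illustrative assumptions, not the survey's method.

```python
import math
import random

# Toy "reasoning" task (an assumption for illustration): starting from 3,
# reach 8 by composing primitive steps. Reward 1.0 is given only when a
# sampled trajectory ends at the correct answer.
OPS = {"+1": lambda x: x + 1, "*2": lambda x: x * 2, "-1": lambda x: x - 1}
ACTIONS = list(OPS)
theta = {a: 0.0 for a in ACTIONS}  # logits of a tabular softmax policy

def policy_probs():
    """Softmax over the action logits, in ACTIONS order."""
    z = [math.exp(theta[a]) for a in ACTIONS]
    s = sum(z)
    return [v / s for v in z]

def rollout(start=3, target=8, max_steps=4):
    """Sample one trial-and-error reasoning trajectory."""
    x, traj = start, []
    for _ in range(max_steps):
        a = random.choices(ACTIONS, weights=policy_probs())[0]
        traj.append(a)
        x = OPS[a](x)
        if x == target:
            return traj, 1.0  # reached the correct final answer
    return traj, 0.0

def reinforce(episodes=2000, lr=0.1):
    """REINFORCE: upweight actions along trajectories that earn reward."""
    for _ in range(episodes):
        traj, reward = rollout()
        probs = dict(zip(ACTIONS, policy_probs()))
        for a in ACTIONS:
            # d/d theta_a of sum_t log pi(step_t) = sum_t (1[step_t == a] - pi(a))
            grad = sum((1.0 if step == a else 0.0) - probs[a] for step in traj)
            theta[a] += lr * reward * grad

if __name__ == "__main__":
    reinforce()
    print(dict(zip(ACTIONS, [round(p, 3) for p in policy_probs()])))
```

Successful trajectories sampled this way can also be retained as synthetic CoT training data, which is the automated data-construction loop the abstract highlights.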
Problem

Research questions and friction points this paper is trying to address.

Enhancing Cognitive Abilities
Large Language Models
Problem Solving

Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Enhanced Cognitive Abilities
Optimized Problem Solving Strategies
Authors

Fengli Xu, Tsinghua University. Interests: LLM Agent, Data Science, Social Computing, Science of Science, Urban Science
Qianyue Hao, PhD Student, Department of Electronic Engineering, Tsinghua University. Interests: Reinforcement Learning, Large Language Models
Zefang Zong, Department of Electronic Engineering, Tsinghua University. Interests: Data Mining, AI
Jingwei Wang, Tsinghua University, Beijing, China
Yunke Zhang, Tsinghua University, Beijing, China
Jingyi Wang, Tsinghua University, Beijing, China
Xiaochong Lan, Tsinghua University. Interests: Large Language Models, LLM Agent
Jiahui Gong, Tsinghua University. Interests: Machine Learning, Spatial Temporal Prediction, Recommender System
Tianjian Ouyang, Tsinghua University
Fanjin Meng, Tsinghua University, Beijing, China
Chenyang Shao, PhD Student, EE, Tsinghua University. Interests: Large Language Model, LLM Agent, RL
Yuwei Yan, HKUST (GZ), Guangzhou, China
Qinglong Yang, Tsinghua University, Beijing, China
Yiwen Song, Tsinghua University, Beijing, China
Sijian Ren, Tsinghua University, Beijing, China
Xinyuan Hu, Undergraduate at Emory University. Interests: AI, LLM
Yu Li, Tsinghua University, Beijing, China
Jie Feng, Tsinghua University, Beijing, China
Chen Gao, Tsinghua University, Beijing, China
Yong Li, Tsinghua University, Beijing, China