🤖 AI Summary
Blind scaling of large language models (LLMs)—through increased model size or training data—does not necessarily improve reasoning performance and may degrade logical consistency, robustness, and alignment. Method: The authors propose the first five-dimensional scaling taxonomy for LLM reasoning capabilities, encompassing input context length, reasoning step depth, interaction turn count, training-driven reasoning, and task complexity. They conduct systematic cross-paradigm comparisons and multi-dimensional attribution analysis, integrating empirical evaluations under a unified assessment framework. Contribution/Results: Empirical findings reveal substantial returns from scaling input context and interaction turns, whereas increasing single-step reasoning depth consistently exacerbates hallucination. The study establishes principled, scale–task co-design guidelines and an evolutionary roadmap for building trustworthy AI reasoning systems, grounded in empirically validated trade-offs across scaling dimensions.
📝 Abstract
The rapid advancement of large language models (LLMs) has significantly enhanced their reasoning capabilities, driven by strategies such as multi-agent collaboration. However, unlike the well-established performance gains achieved by scaling data and model size, scaling reasoning in LLMs is more complex and can even degrade reasoning performance, introducing new challenges in model alignment and robustness. In this survey, we provide a comprehensive examination of scaling in LLM reasoning, categorizing it into multiple dimensions and analyzing how, and to what extent, different scaling strategies improve reasoning capabilities. We begin with scaling in input size, which enables LLMs to process and exploit more extensive context for improved reasoning. Next, we analyze scaling in reasoning steps, which improves multi-step inference and logical consistency. We then examine scaling in reasoning rounds, where iterative interactions refine reasoning outcomes. Furthermore, we discuss scaling in training-enabled reasoning, focusing on optimization through iterative model improvement. Finally, we review applications of scaling across domains and outline future directions for further advancing LLM reasoning. By synthesizing these diverse perspectives, this survey aims to clarify how scaling strategies fundamentally enhance the reasoning capabilities of LLMs and to guide the development of next-generation AI systems.
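The "scaling in reasoning rounds" dimension above can be sketched as a critique-and-refine loop. The code below is a minimal illustration, not an implementation from the survey: the `critique` and `refine` functions are hypothetical stand-ins for what would, in practice, be LLM calls (a verifier turn and a revision turn).

```python
# Minimal sketch of scaling reasoning by rounds: iterate a
# critique -> refine loop until the verifier is satisfied or a
# round budget is exhausted. Both helper functions are hypothetical
# stubs standing in for LLM API calls.

def refine(answer, feedback):
    """Hypothetical revision turn: fold the critique into the answer."""
    return answer + " [revised: " + feedback + "]"

def critique(answer):
    """Hypothetical verifier turn: return a critique, or None when satisfied."""
    return None if "revised" in answer else "add missing justification"

def iterative_reasoning(initial_answer, max_rounds=3):
    """Run up to max_rounds interaction turns, stopping early on approval."""
    answer = initial_answer
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback is None:  # verifier satisfied -> stop early
            break
        answer = refine(answer, feedback)
    return answer

print(iterative_reasoning("draft answer"))
```

The round budget (`max_rounds`) is the scaled quantity: raising it allows more refinement turns, mirroring the survey's observation that returns from additional interaction turns, unlike deeper single-step reasoning, tend to be substantial.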