Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the "overthinking" problem in large language models (LLMs), in which chain-of-thought (CoT) reasoning produces redundant inference steps, this paper presents the first structured taxonomy of efficient reasoning for LLMs. The framework organizes existing work along three dimensions: model-based optimization, dynamic compression of the reasoning output, and difficulty-aware prompt control. Methodologically, it unifies model lightweighting, dynamic step pruning, and prompt-difficulty modeling, and highlights the combination of small-model distillation with training on compact, efficient data. Techniques covered include supervised fine-tuning, reinforcement learning, CoT optimization, inference truncation, and benchmark design. Contributions include: (i) the first comprehensive survey of efficient LLM reasoning; (ii) a clarification of the key technical pathways and standardized evaluation metrics; and (iii) guidance toward reducing inference latency and computational overhead, providing a systematic methodological foundation for efficient LLM deployment.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in complex tasks. Recent advancements in Large Reasoning Models (LRMs), such as OpenAI o1 and DeepSeek-R1, have further improved performance in System-2 reasoning domains like mathematics and programming by harnessing supervised fine-tuning (SFT) and reinforcement learning (RL) techniques to enhance the Chain-of-Thought (CoT) reasoning. However, while longer CoT reasoning sequences improve performance, they also introduce significant computational overhead due to verbose and redundant outputs, known as the "overthinking phenomenon". In this paper, we provide the first structured survey to systematically investigate and explore the current progress toward achieving efficient reasoning in LLMs. Overall, relying on the inherent mechanism of LLMs, we categorize existing works into several key directions: (1) model-based efficient reasoning, which considers optimizing full-length reasoning models into more concise reasoning models or directly training efficient reasoning models; (2) reasoning output-based efficient reasoning, which aims to dynamically reduce reasoning steps and length during inference; (3) input prompts-based efficient reasoning, which seeks to enhance reasoning efficiency based on input prompt properties such as difficulty or length control. Additionally, we introduce the use of efficient data for training reasoning models, explore the reasoning capabilities of small language models, and discuss evaluation methods and benchmarking.
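The abstract's third category, input prompts-based efficient reasoning, can be illustrated with a minimal sketch. The budget wording, the `budgeted_prompt` helper, and the whitespace-based token approximation below are illustrative assumptions for this card, not the paper's own implementation:

```python
# Hypothetical sketch of prompt-based length control: prepend an explicit
# reasoning budget to the question, then check outputs against it.

def budgeted_prompt(question: str, token_budget: int) -> str:
    """Wrap a question with an explicit reasoning-length budget."""
    return (
        f"Answer the following question. Use at most {token_budget} tokens "
        f"of reasoning, then state the final answer.\n\n{question}"
    )

def within_budget(reasoning: str, token_budget: int) -> bool:
    """Rough budget check: approximate tokens by whitespace-split words."""
    return len(reasoning.split()) <= token_budget

prompt = budgeted_prompt("What is 17 * 24?", token_budget=50)
```

In practice the budget would be tuned to the prompt's estimated difficulty, which is the "difficulty-aware" aspect the survey highlights.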
Problem

Research questions and friction points this paper is trying to address.

Address computational overhead in long reasoning sequences.
Optimize reasoning models for efficiency and conciseness.
Enhance reasoning efficiency through input prompt properties.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compress full-length reasoning models into concise reasoners
Dynamically reduce reasoning steps during inference
Enhance reasoning efficiency via input prompt properties
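The second innovation above, dynamically reducing reasoning steps during inference, can be sketched as an early-stopping heuristic. This is an illustrative example, not a method from the paper: the convergence criterion and `patience` parameter are assumptions:

```python
# Illustrative inference-time step pruning: stop consuming reasoning steps
# once consecutive steps converge on the same candidate answer.

def prune_reasoning(steps, extract_answer, patience=2):
    """Keep steps until `extract_answer` returns the same non-None answer
    for `patience` consecutive steps, then truncate."""
    kept = []
    last, streak = None, 0
    for step in steps:
        kept.append(step)
        ans = extract_answer(step)
        if ans is not None and ans == last:
            streak += 1
        else:
            streak = 1 if ans is not None else 0
            last = ans
        if streak >= patience:
            break
    return kept
```

A real system would apply this criterion during generation (e.g. via a stopping condition) rather than post hoc, so the truncated steps are never produced at all.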