Fast, Slow, and Tool-augmented Thinking for LLMs: A Review

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack a systematic framework for selecting appropriate reasoning strategies—such as intuitive response, stepwise deduction, or tool invocation—based on task requirements. To address this, we propose the first LLM reasoning strategy taxonomy grounded in cognitive psychology’s dual-process theory, jointly characterizing “fast/slow thinking” and “internal/external knowledge boundaries.” This framework explicitly defines three reasoning modes: intuitive, incremental, and tool-augmented. Through systematic literature review and categorical analysis, we model existing technical approaches, identify key decision factors—including task complexity, knowledge accessibility, and real-time constraints—and synthesize corresponding implementation mechanisms. Our framework establishes a theoretical foundation and practical guidance for interpretable strategy selection, dynamic scheduling, and controllable optimization of LLM reasoning, thereby advancing the development of more efficient and robust adaptive reasoning systems.
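The framework's strategy selection can be pictured as a small dispatcher over the decision factors the review identifies (task complexity, knowledge accessibility, real-time constraints). The sketch below is illustrative only, not code from the paper: the `Task` fields, thresholds, and function names are hypothetical stand-ins for whatever signals a real system would estimate.

```python
from dataclasses import dataclass

@dataclass
class Task:
    complexity: float          # hypothetical score in [0, 1] for reasoning difficulty
    knowledge_in_params: bool  # can the answer be derived from model parameters alone?
    realtime: bool             # is there a strict latency budget?

def select_strategy(task: Task) -> str:
    """Map decision factors to one of the review's three reasoning modes."""
    if not task.knowledge_in_params:
        # Internal/external boundary crossed: the model must invoke tools
        # (retrieval, calculators, APIs) to access missing knowledge.
        return "tool-augmented"
    if task.complexity < 0.3 or task.realtime:
        # Fast thinking: respond directly when the task is easy or latency-bound.
        return "intuitive"
    # Slow thinking: deliberate, stepwise deduction for harder tasks.
    return "incremental"
```

For example, a multi-hop question whose facts postdate the model's training would route to `"tool-augmented"`, while a simple factual lookup under a latency budget would route to `"intuitive"`.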

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable progress in reasoning across diverse domains. However, effective reasoning in real-world tasks requires adapting the reasoning strategy to the demands of the problem, ranging from fast, intuitive responses to deliberate, step-by-step reasoning and tool-augmented thinking. Drawing inspiration from cognitive psychology, we propose a novel taxonomy of LLM reasoning strategies along two boundaries: a fast/slow boundary separating intuitive from deliberative processes, and an internal/external knowledge boundary distinguishing reasoning grounded in the model's parameters from reasoning augmented by external tools. We systematically survey recent work on adaptive reasoning in LLMs and categorize methods based on key decision factors. We conclude by highlighting open challenges and future directions toward more adaptive, efficient, and reliable LLMs.
Problem

Research questions and friction points this paper is trying to address.

Adapting LLM reasoning strategies to task demands
Classifying reasoning by fast/slow and internal/external boundaries
Surveying adaptive reasoning methods for efficient LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fast and slow reasoning strategies for LLMs
Tool-augmented thinking for enhanced reasoning
Taxonomy based on cognitive psychology principles