DynamicMind: A Tri-Mode Thinking System for Large Language Models

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to dynamically align reasoning depth with task complexity in zero-shot question answering. Method: This paper proposes a tri-mode autonomous thinking system with Fast, Normal, and Slow modes, extending the dual-process cognitive framework to three distinct reasoning modes. It introduces "thinking density" as a quantitative metric of reasoning intensity, constructs the TMC (Thinking Mode Capacity) benchmark dataset, and designs a lightweight Mind Router classifier for adaptive mode selection. The approach integrates cognition-inspired prompting, dynamic mode prediction, and resource-complexity alignment modeling. Contribution/Results: Evaluated on mathematical, commonsense, and scientific QA benchmarks, the method achieves significant zero-shot performance gains while reducing redundant computation, striking an effective trade-off between accuracy and efficiency.

📝 Abstract
Modern large language models (LLMs) often struggle to dynamically adapt their reasoning depth to varying task complexities, leading to suboptimal performance or inefficient resource utilization. To address this, we introduce DynamicMind, a novel tri-mode thinking system. DynamicMind empowers LLMs to autonomously select between Fast, Normal, and Slow thinking modes for zero-shot question answering (ZSQA) tasks through cognitive-inspired prompt engineering. Our framework's core innovations include: (1) expanding the established dual-process framework of fast and slow thinking into a tri-mode thinking system involving a normal thinking mode to preserve the intrinsic capabilities of the LLM; (2) proposing the Thinking Density metric, which aligns computational resource allocation with problem complexity; and (3) developing the Thinking Mode Capacity (TMC) dataset and a lightweight Mind Router to predict the optimal thinking mode. Extensive experiments across diverse mathematical, commonsense, and scientific QA benchmarks demonstrate that DynamicMind achieves superior ZSQA capabilities while establishing an effective trade-off between performance and computational efficiency.
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle to adapt reasoning depth to task complexity
Need efficient resource use without sacrificing performance
Requires dynamic thinking mode selection for zero-shot QA
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tri-mode thinking system for dynamic reasoning
Thinking Density metric for resource allocation
Mind Router predicts optimal thinking mode
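
The pipeline implied by the points above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's implementation: the actual Mind Router is a trained lightweight classifier, whereas `route` here is a toy keyword heuristic, and the mode prompt prefixes are invented placeholders.

```python
# Hypothetical sketch of tri-mode routing: map a question to Fast, Normal,
# or Slow thinking, then prepend a mode-specific prompt prefix.
FAST, NORMAL, SLOW = "fast", "normal", "slow"

# Illustrative prompt prefixes for the three modes (not the paper's prompts).
MODE_PROMPTS = {
    FAST: "Answer directly and concisely.\n",
    NORMAL: "Answer the question, briefly explaining your reasoning.\n",
    SLOW: "Think step by step, checking each step, before answering.\n",
}

def route(question: str) -> str:
    """Toy complexity heuristic standing in for the trained Mind Router."""
    # Count surface signals of task complexity (purely illustrative).
    signals = sum(tok in question.lower()
                  for tok in ("prove", "calculate", "how many", "why"))
    if len(question.split()) < 8 and signals == 0:
        return FAST
    return SLOW if signals >= 2 else NORMAL

def build_prompt(question: str) -> str:
    """Assemble the final prompt for the predicted thinking mode."""
    return MODE_PROMPTS[route(question)] + question
```

In the actual system, the router would instead be trained on the TMC dataset, so that the predicted mode aligns computational effort (thinking density) with problem complexity.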
Authors

Wei Li, Southern University of Science and Technology
Yanbin Wei, Southern University of Science and Technology and Hong Kong University of Science and Technology
Qiushi Huang, University of Surrey (Natural Language Processing, Natural Language Understanding, Natural Language Generation)
Jiangyue Yan, Southern University of Science and Technology
Yang Chen, Southern University of Science and Technology
James T. Kwok, Professor of Computer Science and Engineering, Hong Kong University of Science and Technology (Machine learning)
Yu Zhang, Southern University of Science and Technology