Chain of Mindset: Reasoning with Adaptive Cognitive Modes

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of current large language models: they employ fixed reasoning paradigms and struggle to adapt to the varying cognitive demands of different reasoning stages. The authors propose a training-free agent framework that decomposes reasoning into four heterogeneous cognitive modes (spatial, convergent, divergent, and algorithmic), dynamically orchestrated by a meta-agent based on the current reasoning state. A bidirectional contextual gating mechanism efficiently coordinates information flow among these modes. This approach achieves the first dynamic collaboration of multiple cognitive modes, transcending the constraints of conventional fixed reasoning paradigms. It attains state-of-the-art performance across six benchmarks spanning mathematical reasoning, code generation, scientific question answering, and spatial reasoning, improving overall accuracy over the strongest baseline by 4.96% on Qwen3-VL-32B-Instruct and 4.72% on Gemini-2.0-Flash, while maintaining computational efficiency.

📝 Abstract
Human problem-solving is never the repetition of a single mindset, by which we mean a distinct mode of cognitive processing. When tackling a specific task, we do not rely on one mindset; instead, we integrate multiple mindsets within a single solution process. However, existing LLM reasoning methods fall into a common trap: they apply the same fixed mindset across all steps, overlooking that different stages of solving the same problem require fundamentally different mindsets. This single-minded assumption prevents models from reaching the next level of intelligence. To address this limitation, we propose Chain of Mindset (CoM), a training-free agentic framework that enables step-level adaptive mindset orchestration. CoM decomposes reasoning into four functionally heterogeneous mindsets: Spatial, Convergent, Divergent, and Algorithmic. A Meta-Agent dynamically selects the optimal mindset based on the evolving reasoning state, while a bidirectional Context Gate filters cross-module information flow to maintain effectiveness and efficiency. Experiments across six challenging benchmarks spanning mathematics, code generation, scientific QA, and spatial reasoning demonstrate that CoM achieves state-of-the-art performance, outperforming the strongest baseline in overall accuracy by 4.96% on Qwen3-VL-32B-Instruct and 4.72% on Gemini-2.0-Flash, while balancing reasoning efficiency. Our code is publicly available at https://github.com/QuantaAlpha/chain-of-mindset.
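The abstract describes CoM's control flow: a Meta-Agent inspects the evolving reasoning state, picks one of four mindsets for the next step, and a Context Gate filters what flows back into the shared context. The sketch below illustrates that loop in plain Python. It is a conceptual sketch only: the mindset functions, the keyword heuristic in `meta_agent`, and the truncation-based `context_gate` are hypothetical stand-ins for the paper's LLM-driven components, not the authors' implementation.

```python
from typing import Callable

# Four functionally heterogeneous mindsets. In CoM these would be
# LLM calls with specialized prompts; here they are placeholder functions
# that just tag the reasoning state with the step they performed.
MINDSETS: dict[str, Callable[[str], str]] = {
    "spatial": lambda state: state + " -> spatial step",
    "convergent": lambda state: state + " -> convergent step",
    "divergent": lambda state: state + " -> divergent step",
    "algorithmic": lambda state: state + " -> algorithmic step",
}


def meta_agent(state: str) -> str:
    """Select the next mindset from the evolving reasoning state.

    A real Meta-Agent would query an LLM; a keyword heuristic stands in
    for illustration.
    """
    if "diagram" in state or "layout" in state:
        return "spatial"
    if "brainstorm" in state or "enumerate" in state:
        return "divergent"
    if "compute" in state or "code" in state:
        return "algorithmic"
    return "convergent"


def context_gate(new_output: str, max_len: int = 400) -> str:
    """Gate cross-module information flow.

    The paper's bidirectional Context Gate filters what each mindset
    sees and emits; simple truncation to the most recent context is a
    minimal stand-in for that filtering.
    """
    return new_output[-max_len:]


def chain_of_mindset(problem: str, steps: int = 3) -> tuple[str, list[str]]:
    """Run the step-level orchestration loop and return (state, mode trace)."""
    state = problem
    trace: list[str] = []
    for _ in range(steps):
        mode = meta_agent(state)        # Meta-Agent picks a mindset
        trace.append(mode)
        output = MINDSETS[mode](state)  # mindset advances the reasoning
        state = context_gate(output)    # Context Gate filters the context
    return state, trace
```

For example, a problem phrased as "compute 2+3 with code" would route every step to the algorithmic mindset under this heuristic, whereas a prompt mentioning a diagram would start in the spatial mindset. The point of the sketch is the separation of concerns: mode selection, mode execution, and context filtering are independent components, which is what makes the framework training-free and model-agnostic.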
Problem

Research questions and friction points this paper is trying to address.

reasoning
mindset
cognitive modes
large language models
adaptive reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain of Mindset
adaptive cognitive modes
Meta-Agent
Context Gate
heterogeneous reasoning
👥 Authors
Tianyi Jiang (PKU)
Arctanx An (PKU)
Hengyi Feng (PKU)
Naixin Zhai (QuantaAlpha)
Haodong Li (UC San Diego; prev. HKUST, ZJU, Tencent): 3DV, Generative Models, Agents
Xiaomin Yu (QuantaAlpha)
Jiahui Liu (Fujitsu Research of America): Quantum Computing, Cryptography, Quantum Cryptography
Hanwen Du (The Ohio State University): Machine Learning
Shuo Zhang (QuantaAlpha)
Zhi Yang (SUFE)
Jie Huang (SUFE)
Yuhua Li (QuantaAlpha)
Yongxin Ni (National University of Singapore): Recommender Systems
Huacan Wang (QuantaAlpha)
Ronghao Chen (PKU, QuantaAlpha)