Beyond Fast and Slow: Cognitive-Inspired Elastic Reasoning for Large Language Models

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to dynamically balance reasoning efficiency and accuracy in response to query complexity. Method: This paper proposes CogER, a cognition-inspired elastic reasoning framework that (1) establishes an elastic reasoning paradigm grounded in human hierarchical cognition; (2) formulates strategy selection as a reinforcement learning–driven Markov decision process for complexity-aware, dynamic mode switching; and (3) introduces a cognitive tool coordination mechanism enabling LLMs to autonomously invoke external tools for reasoning augmentation. Results: On in-domain and out-of-domain tasks, CogER achieves relative improvements in average exact match of at least 13% and 8%, respectively, outperforming state-of-the-art test-time scaling methods. The framework achieves adaptive reasoning without compromising fidelity, demonstrating robust generalization across diverse query complexities and task distributions.

📝 Abstract
Large language models (LLMs) have demonstrated impressive performance across various language tasks. However, existing LLM reasoning strategies rely mainly on the LLM itself operating in a fast or slow mode (like o1-style thinking) and thus struggle to balance reasoning efficiency and accuracy across queries of varying difficulty. In this paper, we propose Cognitive-Inspired Elastic Reasoning (CogER), a framework inspired by human hierarchical reasoning that dynamically selects the most suitable reasoning strategy for each query. Specifically, CogER first assesses the complexity of each incoming query and assigns it to one of several predefined levels, each corresponding to a tailored processing strategy, thereby addressing the challenge of unobservable query difficulty. To achieve automatic strategy selection, we model the process as a Markov decision process and train a CogER-Agent using reinforcement learning. The agent is guided by a reward function that balances solution quality against computational cost, ensuring resource-efficient reasoning. Moreover, for queries requiring external tools, we introduce Cognitive Tool-Assisted Reasoning, which enables the LLM to autonomously invoke external tools within its chain of thought. Extensive experiments demonstrate that CogER outperforms state-of-the-art test-time scaling methods, achieving at least a 13% relative improvement in average exact match on in-domain tasks and an 8% relative gain on out-of-domain tasks.
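The cost-aware reward described in the abstract can be illustrated with a minimal sketch. The linear quality-minus-cost form, the weight `lam`, the greedy selection over action values, and all names here are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a reward that trades off solution quality against
# computational cost, with greedy strategy selection over learned values.
# The linear form and the strategy names are assumptions for illustration.

def elastic_reward(quality: float, cost: float, lam: float = 0.1) -> float:
    """Reward = solution quality minus a penalty proportional to compute cost."""
    return quality - lam * cost

def select_strategy(q_values: dict[str, float]) -> str:
    """Greedy selection over per-strategy action values (one per reasoning
    level, e.g. fast answer, slow chain-of-thought, tool-assisted)."""
    return max(q_values, key=q_values.get)

# A correct answer reached cheaply earns more reward than one reached
# expensively, which is what pushes the agent toward elastic reasoning.
cheap = elastic_reward(quality=1.0, cost=2.0)   # ~0.8
costly = elastic_reward(quality=1.0, cost=8.0)  # ~0.2

print(select_strategy({"fast": 0.4, "slow_cot": 0.7, "tool_assisted": 0.55}))
# slow_cot
```

In a full RL setup the action values would be learned from rollouts; the sketch only shows how the cost penalty shapes the preference ordering.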
Problem

Research questions and friction points this paper is trying to address.

Balancing reasoning efficiency and accuracy for varying query difficulties
Dynamically selecting optimal reasoning strategies for each query
Enabling autonomous external tool invocation for complex queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic strategy selection based on query complexity assessment
Reinforcement learning agent balancing solution quality and cost
Autonomous tool invocation within chain-of-thought reasoning
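Tool invocation inside a chain of thought can be sketched as detecting tool-call markup in the model's output and splicing each result back in. The `<tool>name: args</tool>` tag format and the toy calculator are assumptions for illustration; the paper's actual protocol may differ:

```python
# Hedged sketch: resolve tool calls embedded in a chain-of-thought string.
# The tag format and the calculator tool are illustrative assumptions.
import re

TOOL_PATTERN = re.compile(r"<tool>(\w+):\s*(.*?)</tool>", re.DOTALL)

def run_tool(name: str, arg: str) -> str:
    """Dispatch a tool call; only a toy calculator is registered here."""
    if name == "calc":
        # Restrict eval to arithmetic characters before evaluating.
        safe = re.sub(r"[^0-9+\-*/(). ]", "", arg)
        return str(eval(safe))
    return f"[unknown tool: {name}]"

def resolve_tool_calls(cot: str) -> str:
    """Replace each embedded tool call with its result so the LLM can
    continue reasoning over the augmented chain of thought."""
    return TOOL_PATTERN.sub(lambda m: run_tool(m.group(1), m.group(2)), cot)

cot = "The total is <tool>calc: 17 * 3 + 4</tool>, so the answer is 55."
print(resolve_tool_calls(cot))
# The total is 55, so the answer is 55.
```

In practice the loop alternates generation and resolution: the model emits text until a tool tag appears, the result is appended, and generation resumes.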