🤖 AI Summary
Large language models face two key bottlenecks in logical “slow thinking” reasoning: fragmented reasoning processes impairing logical coherence, and high-dimensional search spaces causing prohibitive computational complexity. To address these, we propose the Atomic Reasoning (AR) framework, which decomposes reasoning into schedulable atomic cognitive units and constructs structured reasoning paths via dynamic cognitive routing. AR introduces—first in the literature—atomic-level cognitive decomposition coupled with a graph-structured dynamic routing mechanism, preserving logical coherence while substantially improving computational efficiency. Experimental results demonstrate that AR achieves a 19.3% absolute accuracy gain over Chain-of-Thought and Tree-of-Thought on long-range logical reasoning tasks (e.g., linguistic logic puzzles), while reducing search overhead by 67%. Critically, AR enables the first computationally tractable, controllable, and interpretable modeling of human-like deep reasoning processes.
📝 Abstract
Recent advances in large language models (LLMs) have shown remarkable progress, yet their capacity for logical ``slow-thinking'' reasoning persists as a critical research frontier. Current inference scaling paradigms suffer from two fundamental constraints: fragmented thought flows compromising logical coherence, and intensively computational complexity that escalates with search space dimensions. To overcome these limitations, we present extbf{Atomic Reasoner} ( extbf{AR}), a cognitive inference strategy that enables fine-grained reasoning through systematic atomic-level operations. AR decomposes the reasoning process into atomic cognitive units, employing a cognitive routing mechanism to dynamically construct reasoning representations and orchestrate inference pathways. This systematic methodology implements stepwise, structured cognition, which ensures logical coherence while significantly reducing cognitive load, effectively simulating the cognitive patterns observed in human deep thinking processes. Extensive experimental results demonstrate AR's superior reasoning capabilities without the computational burden of exhaustive solution searches, particularly excelling in linguistic logic puzzles. These findings substantiate AR's effectiveness in enhancing LLMs' capacity for robust, long-sequence logical reasoning and deliberation.