🤖 AI Summary
To address the computational redundancy and reasoning interference caused by accumulated historical information during test-time scaling of large language models (LLMs), this paper proposes the Atom of Thoughts (AoT) framework. AoT models reasoning as a memoryless, Markovian state-transition process: each transition decomposes the current question into a dependency-based directed acyclic graph (DAG) and contracts its subquestions into a new, self-contained, verifiable atomic question, eliminating dependence on accumulated history. AoT also supports plug-and-play integration with existing test-time scaling methods (e.g., ToT, GoT). Evaluated on six benchmarks, AoT achieves significant gains: gpt-4o-mini with AoT attains an 80.6% F1 score on HotpotQA, outperforming o3-mini and DeepSeek-R1 by 3.4% and 10.6%, respectively. The core contribution is the introduction of memoryless state modeling and a structured atomic reasoning paradigm, breaking the historical-coupling bottleneck inherent in conventional test-time scaling approaches.
📝 Abstract
Large Language Models (LLMs) achieve superior performance through training-time scaling, and test-time scaling further enhances their capabilities by conducting effective reasoning during inference. However, as the scale of reasoning increases, existing test-time scaling methods suffer from accumulated historical information, which not only wastes computational resources but also interferes with effective reasoning. To address this issue, we observe that progress on complex reasoning is often achieved by solving a sequence of independent subquestions, each being self-contained and verifiable. These subquestions are essentially atomic questions, relying primarily on their current state rather than accumulated history, similar to the memoryless transitions in a Markov process. Based on this observation, we propose Atom of Thoughts (AoT), where each state transition in the reasoning process consists of decomposing the current question into a dependency-based directed acyclic graph and contracting its subquestions, forming a new atomic question state. This iterative decomposition-contraction process continues until reaching directly solvable atomic questions, naturally realizing Markov transitions between question states. Furthermore, these atomic questions can be seamlessly integrated into existing test-time scaling methods, enabling AoT to serve as a plug-in enhancement for improving reasoning capabilities. Experiments across six benchmarks demonstrate the effectiveness of AoT both as a standalone framework and as a plug-in enhancement. Notably, on HotpotQA, when applied to gpt-4o-mini, AoT achieves an 80.6% F1 score, surpassing o3-mini by 3.4% and DeepSeek-R1 by 10.6%. The code will be available at https://github.com/qixucen/atom.
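
The decomposition-contraction transition described in the abstract can be pictured as a short loop over question states. The sketch below is an illustration only: the helpers `is_atomic`, `decompose`, `contract`, and `solve` are hypothetical stand-ins that would each wrap an LLM call, not the actual API of the linked repository.

```python
# A minimal sketch of AoT's iterative decomposition-contraction loop.
# All helper names here are hypothetical, not the API of
# github.com/qixucen/atom; each would wrap an LLM call in practice.

from dataclasses import dataclass, field


@dataclass
class SubQuestion:
    text: str
    depends_on: list[int] = field(default_factory=list)  # DAG edges: indices of prerequisites


def is_atomic(question: str) -> bool:
    """Judge whether the question is directly solvable (LLM call, stubbed)."""
    raise NotImplementedError


def decompose(question: str) -> list[SubQuestion]:
    """Split the question into a dependency-based DAG of subquestions (stubbed)."""
    raise NotImplementedError


def contract(dag: list[SubQuestion]) -> str:
    """Answer the independent subquestions (those with no prerequisites) and
    fold their answers into the dependent ones, yielding one new question."""
    raise NotImplementedError


def solve(question: str) -> str:
    """Directly answer an atomic question (LLM call, stubbed)."""
    raise NotImplementedError


def aot_solve(question: str, max_steps: int = 8) -> str:
    """Markov-style transitions: each state is a single self-contained
    question, so no reasoning history is carried between iterations."""
    state = question
    for _ in range(max_steps):
        if is_atomic(state):
            break
        dag = decompose(state)  # current question state -> dependency DAG
        state = contract(dag)   # DAG -> new (more atomic) question state
    return solve(state)
```

Note that the loop's only carried variable is the current question `state`; discarding the intermediate DAGs after each contraction is what makes the process memoryless in the paper's sense.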