🤖 AI Summary
This study investigates whether large language models genuinely comprehend mathematical concepts and reasoning principles or merely rely on statistical memorization. To address this, we propose "Atomic Reasoning," a novel paradigm that decouples mathematical reasoning along two orthogonal dimensions: disciplinary domains (algebra, geometry, analysis, topology) and logical structures (forward multi-step deduction vs. counterexample-driven backward reasoning). We construct a fine-grained benchmark and training dataset grounded in formal task design and controlled experiments, enabling isolated evaluation and cross-capability transfer analysis of atomic reasoning skills. Empirical results reveal significant performance disparities across atomic capabilities and uncover asymmetric dependency relationships among them. Our framework provides both a theoretically grounded methodology and an empirical foundation for developing interpretable, cognitively aligned mathematical reasoning models.
📄 Abstract
Large Language Models (LLMs) have demonstrated outstanding performance on mathematical reasoning tasks. However, we argue that current large-scale reasoning models rely primarily on scaling up training datasets with diverse mathematical problems and long chains of thought, which raises the question of whether LLMs genuinely acquire mathematical concepts and reasoning principles or merely memorize the training data. In contrast, humans tend to break down complex problems into multiple fundamental atomic capabilities. Inspired by this, we propose a new paradigm for evaluating mathematical atomic capabilities. Our work categorizes atomic abilities along two dimensions: (1) field-specific abilities across four major mathematical fields (algebra, geometry, analysis, and topology), and (2) logical abilities at different levels, including conceptual understanding, forward multi-step reasoning in formal mathematical language, and counterexample-driven backward reasoning. We construct training and evaluation datasets for each atomic capability unit and conduct extensive experiments on how different atomic capabilities influence one another, in order to identify strategies that elicit a required atomic capability. Evaluation and experimental results on advanced models yield many interesting discoveries about how model performance varies across atomic capabilities and about the interactions between those capabilities. Our findings highlight the importance of decoupling mathematical intelligence into atomic components, providing new insights into model cognition and guiding the development of training strategies toward a more efficient, transferable, and cognitively grounded paradigm of "atomic thinking".