Abstract
Large Language Models (LLMs) have shown remarkable abilities in structured reasoning and symbolic tasks, with coding emerging as a particular area of strength. This success has sparked growing interest in applying LLMs to mathematics, both in informal problem-solving and in formal theorem proving. However, progress in formal mathematics has proven significantly more difficult, despite surface-level similarities between programming and proof construction. This discrepancy raises important questions about how LLMs "reason", how they are supervised, and whether they internally track a notion of computational or deductive state. In this article, we survey the state of the art of the discipline, focusing on recent models and benchmarks, and explore three central issues at the intersection of machine learning and mathematical cognition: (i) the trade-offs between formal and informal mathematics as training domains; (ii) the deeper reasons why proof generation remains more brittle than code synthesis; and (iii) the question of whether LLMs represent, or merely mimic, a notion of evolving logical state. Our goal is not to draw hard boundaries, but to identify where the current limits lie and how they might be extended.
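To make the notion of evolving logical state concrete, consider a minimal sketch in Lean 4 (an illustrative example, not drawn from the article): each tactic transforms the proof state (the current hypotheses and goal), and a prover, whether human or LLM, must track that state exactly, since the next tactic applies to the state produced by the previous one rather than to the original statement.

```lean
-- Minimal Lean 4 proof of commutativity of addition on Nat.
-- After each tactic the proof state changes, and the next tactic
-- must address the *current* goal, not the original theorem.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  induction a with
  | zero =>
    -- State: goal is 0 + b = b + 0
    simp
  | succ n ih =>
    -- State: hypothesis ih : n + b = b + n
    --        goal is n + 1 + b = b + (n + 1)
    rw [Nat.succ_add, Nat.add_succ, ih]
```

Code synthesis tolerates small local errors that a compiler or test suite may still accept; here, a single tactic issued against a stale or misremembered state fails the whole proof, which is one way to frame the brittleness gap discussed above.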