🤖 AI Summary
This study investigates whether large language models (LLMs) can perform approximate mathematical reasoning in informal, fast-paced settings, with particular attention to non-autoregressive decoder architectures. To this end, we introduce StreetMath, the first benchmark tailored to realistic approximate computation, and systematically evaluate models including Qwen, Dream, and Falcon-Mamba. Mechanistic interpretability analyses reveal that these models lack human-like “cognitive miser” tendencies: exact and approximate reasoning rely on largely distinct neural subnetworks, and models frequently overcompute or invoke external tools even after reaching a reasonable approximation early in inference. This work addresses a critical gap in the study of approximate reasoning in non-autoregressive LLMs, providing the first empirical evidence that their approximate reasoning capabilities remain underutilized. We publicly release the StreetMath benchmark, evaluation framework, and analysis code to support future research.
📝 Abstract
There is a substantial body of literature examining the mathematical reasoning capabilities of large language models (LLMs), particularly their performance on precise arithmetic operations in autoregressive architectures. However, their ability to perform approximate reasoning in informal, fast-paced mathematical operations has received far less attention, especially among non-autoregressive decoder models. Our work addresses this gap by introducing StreetMath, a benchmark designed to evaluate models' approximation abilities under real-world scenarios. We conduct extensive evaluations across different LLM architectures: Qwen3-4B-Instruct-2507, Qwen3-4B-Thinking-2507, Dream-v0-Instruct-7B, Falcon-Mamba-7B-Instruct, and Mamba-GPT-3B. Furthermore, we apply mechanistic interpretability techniques to probe their internal computational states. Our analysis reveals that LLMs generally attempt to compute exact values or invoke external tools even in tasks that call for approximation. Moreover, while models sometimes reach the correct answer in early layers or steps, they still consume more tokens when solving approximation tasks. Additional experiments indicate that exact and approximate arithmetic operations rely on largely separate neural components. Drawing on research in cognitive psychology, we argue that LLMs do not exhibit cognitive miserliness the way humans do in street math settings. We open-source our work at https://github.com/ctseng777/StreetMath
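To make the evaluation setting concrete, here is a minimal sketch of how an approximate ("street math") answer might be graded against an exact value using a relative-error tolerance. The function name and the 5% threshold are illustrative assumptions, not the paper's actual StreetMath scoring code.

```python
# Hypothetical tolerance-based grading for approximate answers.
# is_acceptable_approximation and rel_tol=0.05 are assumptions for
# illustration, not the benchmark's real evaluation logic.

def is_acceptable_approximation(predicted: float, exact: float,
                                rel_tol: float = 0.05) -> bool:
    """Accept an answer within rel_tol relative error of the exact value."""
    if exact == 0:
        # Fall back to an absolute check when the exact value is zero.
        return abs(predicted) <= rel_tol
    return abs(predicted - exact) / abs(exact) <= rel_tol

# Example task: estimate a 15% tip on a $47.80 bill.
exact_tip = 47.80 * 0.15                              # 7.17
print(is_acceptable_approximation(7.0, exact_tip))    # ~2.4% error -> True
print(is_acceptable_approximation(9.0, exact_tip))    # ~25% error  -> False
```

Under this kind of metric, a model that rounds aggressively and answers quickly can still score well, which is exactly the fast, approximate behavior the benchmark probes for.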