SenseMath: Do LLMs Have Number Sense? Evaluating Shortcut Use, Judgment, and Generation

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models possess human-like numerical intuition—specifically, the ability to recognize numerical structure and to employ or avoid computational shortcuts as appropriate. To this end, the authors introduce SenseMath, a benchmark of 4,800 problems spanning eight shortcut types and four digit-length scales. Structural sensitivity in numerical reasoning is assessed through three tasks: shortcut utilization, applicability judgment, and problem generation. Experiments on several mainstream models use controlled designs, comparing standard chain-of-thought prompting with explicit shortcut instructions. Results show that while explicit prompting boosts accuracy by up to 15%, models spontaneously use valid shortcuts in fewer than 40% of applicable cases, and they frequently misapply shortcuts or fail to generate valid ones—indicating procedural fluency without genuine structural understanding.
📝 Abstract
Large language models often default to step-by-step computation even when efficient numerical shortcuts are available. This raises a basic question: do they exhibit number sense in a human-like behavioral sense, i.e., the ability to recognize numerical structure, apply shortcuts when appropriate, and avoid them when they are not? We introduce SenseMath, a controlled benchmark for evaluating structure-sensitive numerical reasoning in LLMs. SenseMath contains 4,800 items spanning eight shortcut categories and four digit scales, with matched strong-shortcut, weak-shortcut, and control variants. It supports three evaluation settings of increasing cognitive demand: Shortcut Use (whether models can apply shortcuts on shortcut-amenable problems); Applicability Judgment (whether they can recognize when a shortcut is appropriate or misleading); and Problem Generation (whether they can generate new problem items that correctly admit a given type of shortcut). Our evaluation across five LLMs, ranging from GPT-4o-mini to Llama-3.1-8B, shows a consistent pattern: when explicitly prompted, models readily adopt shortcut strategies and achieve substantial accuracy gains on shortcut-amenable items (up to 15%), yet under standard chain-of-thought prompting they spontaneously employ such strategies in fewer than 40% of cases, even when they demonstrably possess the requisite capability. Moreover, this competence is confined to the Use level; models systematically over-generalise shortcuts to problems where they do not apply, and fail to generate valid shortcut-bearing problems from scratch. Together, these results suggest that current LLMs exhibit procedural shortcut fluency without the structural understanding of when and why shortcuts work that underlies human number sense.
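To make the abstract's notion of a "shortcut-amenable" item concrete, here is a minimal illustrative sketch (not from the paper; the function names and the specific complement trick are assumptions) of one classic shortcut: when a factor sits just below a power of ten, rewrite a × b as (a + d) × b − d × b, versus grinding through direct multiplication.

```python
def stepwise_multiply(a: int, b: int) -> int:
    """Baseline strategy: direct multiplication, no structural insight."""
    return a * b

def near_power_shortcut(a: int, b: int) -> int:
    """Complement shortcut: if a is within a small distance d of a power
    of ten, compute a*b as (a+d)*b - d*b, e.g. 999*47 = 47000 - 47."""
    power = 10 ** len(str(a))   # smallest power of ten above a
    d = power - a
    if d <= 5:                  # shortcut applies only near a power of ten
        return power * b - d * b
    return stepwise_multiply(a, b)  # control case: no valid shortcut

# Strong-shortcut item: 999 is one below 1000, so the rewrite applies.
assert near_power_shortcut(999, 47) == 999 * 47
# Control item: 47 is far from any power of ten, so we fall back.
assert near_power_shortcut(47, 123) == 47 * 123
```

The applicability check (`d <= 5`) mirrors the benchmark's Judgment setting: knowing *when* a shortcut is valid, not just how to execute it.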
Problem

Research questions and friction points this paper is trying to address.

number sense
numerical reasoning
shortcut use
large language models
cognitive evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

number sense
numerical reasoning
shortcut strategies
controlled benchmark
structure-sensitive evaluation