🤖 AI Summary
This study investigates whether large language models (LLMs) possess genuine understanding in mathematical reasoning, focusing on their robustness under semantically irrelevant distractors and the absence of core questioning instructions. The authors propose the first progressive perturbation evaluation framework tailored to mathematical reasoning, incorporating multi-dimensional perturbations: injection of numeric and non-numeric distractor sentences at increasing intensity, and deletion of the core questioning instruction. Experimental results reveal that LLMs are highly sensitive to numeric distractors, with performance drops of up to 51.55%; even state-of-the-art commercial models suffer 3%-10% degradation. Crucially, models retain 20%-40% accuracy when the core instruction is omitted, indicating heavy reliance on template matching or memorized patterns rather than logical deduction. This work systematically exposes the superficiality and fragility of LLMs' mathematical reasoning capabilities, challenging prevailing assumptions about their reasoning depth, and establishes a rigorous paradigm for assessing trustworthiness that emphasizes functional robustness over isolated accuracy metrics.
📝 Abstract
LLMs have made significant progress in mathematical reasoning, but whether they truly understand mathematics remains controversial. To explore this question, we propose a new perturbation framework that evaluates LLMs' reasoning ability in complex environments by injecting additional, semantically irrelevant perturbation sentences and gradually increasing the perturbation intensity. We also apply a further perturbation method, omitting the core questioning instruction, to probe the LLMs' problem-solving mechanism. The experimental results show that LLMs remain stable when facing perturbation sentences without numbers, but a robustness boundary still exists: as the perturbation intensity increases, performance declines to varying degrees. Perturbation sentences containing numbers cause much larger drops; most smaller open-source models decline by nearly or even more than 10%, the decline grows further as the perturbation intensity increases, and the maximum drop reaches 51.55%. Even the most advanced commercial LLMs see a 3%-10% performance drop. By analyzing the models' reasoning processes in detail, we find that they are more sensitive to perturbations carrying numerical information and are more likely to give incorrect answers when distracted by irrelevant numbers; the higher the perturbation intensity, the more pronounced these defects become. Moreover, even in the absence of the core questioning instruction, models still achieve 20%-40% accuracy, suggesting that LLMs may rely on memorized templates or pattern matching to complete the task rather than on logical reasoning. Overall, our work reveals the shortcomings and limitations of current LLMs' reasoning capabilities, which is of great significance for the further development of LLMs.
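The two perturbation operations described above (distractor injection at increasing intensity, and removal of the core questioning instruction) can be sketched with a minimal helper. The distractor sentences, the intensity scheme, and the function names below are illustrative assumptions for exposition, not the paper's actual data or implementation:

```python
import random

# Illustrative distractor pools (assumed examples, not the paper's data).
NON_NUMERIC_DISTRACTORS = [
    "The weather that day was sunny and warm.",
    "A nearby shop was painted bright blue.",
]
NUMERIC_DISTRACTORS = [
    "A nearby shop sold 37 postcards last week.",
    "The bus outside had 52 seats and 4 doors.",
]

def perturb(problem: str, intensity: int, numeric: bool, seed: int = 0) -> str:
    """Append `intensity` semantically irrelevant sentences to a math problem.

    `numeric=True` draws distractors containing numbers; a higher `intensity`
    appends more sentences, a simple stand-in for progressively stronger
    perturbation levels.
    """
    rng = random.Random(seed)
    pool = NUMERIC_DISTRACTORS if numeric else NON_NUMERIC_DISTRACTORS
    distractors = [rng.choice(pool) for _ in range(intensity)]
    return " ".join([problem] + distractors)

def drop_core_instruction(problem: str, question: str) -> str:
    """Remove the core questioning instruction, keeping only the premises."""
    return problem.replace(question, "").strip()

base = "Tom has 3 apples and buys 5 more. How many apples does he have?"
print(perturb(base, intensity=2, numeric=True))
print(drop_core_instruction(base, "How many apples does he have?"))
```

A benchmark question perturbed this way keeps the same ground-truth answer, so any accuracy drop relative to the clean question can be attributed to the distractors; the instruction-deletion variant, by contrast, has no derivable answer, so any "correct" output there points to memorization or pattern matching.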