MSCR: Exploring the Vulnerability of LLMs' Mathematical Reasoning Abilities Using Multi-Source Candidate Replacement

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically reveals the vulnerability of large language models (LLMs) to input perturbations in mathematical reasoning tasks. To address the limitations of existing adversarial attacks, namely poor scalability, low semantic fidelity, and high computational cost, we propose MSCR (Multi-Source Candidate Replacement), an automated, semantics-preserving adversarial attack framework. MSCR integrates cosine similarity in the embedding space, WordNet lexical knowledge, and masked-language-model contextual prediction to construct high-fidelity candidate word sets, enabling fine-grained word-level substitutions. Evaluated on the GSM8K and MATH500 benchmarks, MSCR degrades accuracy by up to 49.89% and 35.40%, respectively, using single-word perturbations alone, while also inducing response redundancy and reasoning inefficiency. This work establishes a scalable evaluation framework for mathematical reasoning robustness, providing analytical tools and empirical evidence to advance trustworthy LLM-based reasoning.

📝 Abstract
LLMs demonstrate performance comparable to humans on complex tasks such as mathematical reasoning, but their robustness to minor input perturbations in mathematical reasoning has not been systematically investigated. Existing methods generally suffer from limited scalability, weak semantic preservation, and high cost. We therefore propose MSCR, an automated adversarial attack method based on multi-source candidate replacement. By combining three information sources, namely cosine similarity in the embedding space of the LLM, the WordNet dictionary, and contextual predictions from a masked language model, we generate for each word in the input question a set of semantically similar candidates, which are then filtered and substituted one at a time to carry out the attack. We conduct large-scale experiments on LLMs using the GSM8K and MATH500 benchmarks. The results show that even a slight perturbation involving only a single word can significantly reduce the accuracy of all models, with the maximum drop reaching 49.89% on GSM8K and 35.40% on MATH500, while the perturbed questions remain highly semantically consistent with the originals. Further analysis reveals that perturbations not only lead to incorrect outputs but also substantially increase the average response length, resulting in more redundant reasoning paths and higher computational resource consumption. These findings highlight the robustness deficiencies and efficiency bottlenecks of current LLMs in mathematical reasoning tasks.
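To make the three-source candidate pipeline concrete, the sketch below illustrates one plausible way to build candidate sets from (a) cosine similarity over a word-embedding matrix, (b) WordNet synonyms, and (c) masked-language-model predictions. This is a minimal illustration under assumptions, not the authors' actual implementation: the function names, the `vocab`/`emb` inputs, and the choice of `bert-base-uncased` are all illustrative.

```python
# Hypothetical sketch of MSCR-style multi-source candidate generation.
# Assumes: a word-embedding matrix `emb` with a `vocab` word->row mapping,
# WordNet via NLTK (requires nltk.download("wordnet")), and a masked
# language model from Hugging Face. None of these choices come from the paper.
import numpy as np
import torch
from nltk.corpus import wordnet
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def embedding_candidates(word, vocab, emb, k=5):
    """Top-k nearest vocabulary words by cosine similarity in embedding space."""
    if word not in vocab:
        return []
    v = emb[vocab[word]]
    sims = emb @ v / (np.linalg.norm(emb, axis=1) * np.linalg.norm(v) + 1e-8)
    order = np.argsort(-sims)
    inv = {i: w for w, i in vocab.items()}
    return [inv[i] for i in order[1:k + 1]]  # skip the word itself

def wordnet_candidates(word):
    """Synonyms of `word` drawn from all WordNet synsets."""
    syns = {lemma.name().replace("_", " ")
            for synset in wordnet.synsets(word) for lemma in synset.lemmas()}
    syns.discard(word)
    return sorted(syns)

def mlm_candidates(question, word, k=5):
    """Context-aware replacements predicted by a masked language model."""
    masked = question.replace(word, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**inputs).logits
    pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
    top = logits[0, pos].topk(k).indices.tolist()
    return [tokenizer.decode([t]).strip() for t in top]
```

The union of these three candidate sets would then be filtered for semantic consistency before any substitution is attempted, as the abstract describes.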
Problem

Research questions and friction points this paper is trying to address.

Investigating LLM vulnerability to minor input perturbations in mathematical reasoning
Addressing limited scalability and weak semantic preservation in existing attack methods
Analyzing robustness deficiencies and efficiency bottlenecks in mathematical reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-source candidate replacement for adversarial attacks
Combining embedding similarity, WordNet, and masked language model
Filtering candidates and substituting words one at a time to preserve semantic consistency (see the sketch after this list)
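The filter-and-substitute step can be pictured as the loop below. This is a hedged sketch, not the paper's code: it assumes a multi-source candidate generator like the one shown earlier, a sentence encoder for the semantic-consistency filter (the paper does not specify which model it uses; `all-MiniLM-L6-v2` is one possible stand-in), and a hypothetical `query_llm` callable for the model under attack.

```python
# Hypothetical sketch of a single-word filter-and-substitute attack loop.
# `gen_candidates` and `query_llm` are assumed helpers, not the authors' API.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def mscr_attack(question, answer, gen_candidates, query_llm, sim_threshold=0.9):
    """Try single-word substitutions until the LLM's answer changes."""
    q_emb = encoder.encode(question, convert_to_tensor=True)
    for word in question.split():
        for cand in gen_candidates(question, word):
            perturbed = question.replace(word, cand, 1)
            p_emb = encoder.encode(perturbed, convert_to_tensor=True)
            # Keep only perturbations that stay semantically faithful
            # to the original question.
            if util.cos_sim(q_emb, p_emb).item() < sim_threshold:
                continue
            if query_llm(perturbed) != answer:
                return perturbed  # successful single-word attack
    return None  # no filtered candidate flipped the model's answer
```

The similarity threshold controls the trade-off the abstract highlights: a higher threshold keeps perturbed questions closer in meaning to the originals but leaves fewer candidates available to attack with.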
Zhishen Sun
Xi’an Jiaotong University
Guang Dai
SGIT AI Lab
Haishan Ye
Xi’an Jiaotong University