MATH-Perturb: Benchmarking LLMs' Math Reasoning Abilities against Hard Perturbations

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the reasoning robustness of large language models (LLMs) on mathematical problems under “hard perturbations”—semantic alterations that fundamentally invalidate the original solution strategy—thereby distinguishing genuine reasoning from memorization-dependent behavior. Method: To address the lack of standardized hard-perturbation evaluation, the authors construct two new benchmarks—MATH-P-Simple and MATH-P-Hard—derived from the most challenging (level-5) problems in the MATH dataset and generated via a hybrid human-and-rule-driven perturbation process. Contribution/Results: This is the first systematic definition and empirical assessment of hard perturbations in mathematical reasoning. It reveals a novel memorization phenomenon: LLMs blindly reuse solution templates during in-context learning, even when those templates no longer apply. Cross-model experiments show that MATH-P-Hard induces accuracy drops of up to 16.49% on state-of-the-art models (e.g., o1-mini), exposing how readily current models misapply memorized solution patterns when the problem context changes.

📝 Abstract
Large language models have demonstrated impressive performance on challenging mathematical reasoning tasks, which has triggered the discussion of whether the performance is achieved by true reasoning capability or memorization. To investigate this question, prior work has constructed mathematical benchmarks in which questions undergo simple perturbations -- modifications that still preserve the underlying reasoning patterns of the solutions. However, no work has explored hard perturbations, which fundamentally change the nature of the problem so that the original solution steps do not apply. To bridge the gap, we construct MATH-P-Simple and MATH-P-Hard via simple perturbation and hard perturbation, respectively. Each consists of 279 perturbed math problems derived from level-5 (hardest) problems in the MATH dataset (Hendrycks et al., 2021). We observe significant performance drops on MATH-P-Hard across various models, including o1-mini (-16.49%) and gemini-2.0-flash-thinking (-12.9%). We also raise concerns about a novel form of memorization where models blindly apply learned problem-solving skills without assessing their applicability to modified contexts. This issue is amplified when using original problems for in-context learning. We call for research efforts to address this challenge, which is critical for developing more robust and reliable reasoning models.
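The headline numbers above (e.g., -16.49% for o1-mini) are differences in accuracy between the original and perturbed problem sets. A minimal sketch of how such a comparison could be scored is shown below; the result records and the toy numbers are illustrative assumptions, not the authors' actual evaluation harness.

```python
# Hypothetical sketch of scoring a perturbation benchmark: compare a model's
# accuracy on the original problems against the same problems after
# perturbation. The data below is toy data, not results from the paper.

def accuracy(results):
    """Fraction of problems answered correctly."""
    return sum(1 for r in results if r["correct"]) / len(results)

# Toy outcomes for the same 4 problems before and after perturbation.
original = [{"correct": True}, {"correct": True},
            {"correct": True}, {"correct": False}]
perturbed = [{"correct": True}, {"correct": False},
             {"correct": False}, {"correct": False}]

drop = accuracy(original) - accuracy(perturbed)  # absolute accuracy drop
print(f"accuracy drop: {drop * 100:.2f}%")  # → accuracy drop: 50.00%
```

Because both sets contain the same perturbed derivatives of the same 279 source problems, an absolute difference in accuracy directly measures how much the perturbation degrades the model.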
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' true math reasoning vs. memorization.
Exploring hard perturbations' impact on math problem-solving.
Identifying memorization risks in modified problem contexts.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hard perturbation analysis
MATH-P benchmark construction
Memorization issue identification
Authors

Kaixuan Huang (Princeton University)
Jiacheng Guo (Princeton University)
Zihao Li (Princeton University)
Xiang Ji (Princeton University)
Jiawei Ge (Princeton University)
Wenzhe Li (Princeton University)
Yingqing Guo (Princeton University; Diffusion Models, Generative AI)
Tianle Cai (PhD Student, Princeton University; Machine Learning)
Hui Yuan (Princeton University)
Runzhe Wang (Princeton University)
Yue Wu (Princeton University)
Ming Yin (Princeton University)
Shange Tang (Princeton University; Machine Learning, Statistics)
Yangsibo Huang (Google DeepMind; Machine Learning)
Chi Jin (Assistant Professor, Princeton University; Machine Learning, Optimization)
Xinyun Chen (Google)
Chiyuan Zhang (Google Research; Machine Learning, Computational Neuroscience)
Mengdi Wang (Princeton University)