MatheMagic: Generating Dynamic Mathematics Benchmarks Robust to Memorization

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing mathematical evaluation benchmarks suffer from test-set memorization and overreliance on fixed symbolic rules, leading to overfitting and inaccurate assessment of reasoning capabilities. To address this, we propose a dynamic counterfactual evaluation framework that generates programmable, memorization-resistant test instances via semantic perturbation of symbols—reinterpreting the logical meaning of numerals and operators—and integrates automated answer verification to construct a dynamic problem space spanning both inductive and deductive reasoning. The framework enables controllable seed-based generation and reproducible scaling, substantially improving evaluation robustness. Experimental results show that mainstream LLMs handle deductive tasks more readily than inductive ones but generalize poorly, often reverting to standard mathematical interpretations; even math-specialized fine-tuned models fail under semantic perturbations. This confirms the benchmark's high sensitivity to overfitting and its effectiveness in distinguishing genuine mathematical reasoning from superficial pattern matching.

📝 Abstract
Conducting contamination-free evaluation of mathematical capabilities can be difficult for two reasons: models may memorize a test set once it is made public, and current mathematical benchmarks are prone to overfitting due to having limited diversity of symbols and rules, coupled with closed-ended answers. This paper proposes a method to leverage these shortcomings as useful features to construct a dynamic, counterfactual benchmark, which can be used to both reveal overfitting and measure true reasoning. We demonstrate this via MatheMagic, which generates math test instances with the interpretations of numbers and operators altered, yet with automatically verifiable answers. Test instances are randomly seeded and constructed at test time to evaluate a model's induction or deduction capability, offering stability, extensibility, comparability, and robustness to overfitting. Our experiments find that models solve deduction more easily than induction, but they tend to revert to standard math. Further analysis reveals that math-adapted models fail to exhibit a general "skill" of reasoning, and fine-tuning on induction tasks generalizes poorly.
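The paper's exact perturbation scheme is not reproduced here; the following is a minimal sketch of the general idea, assuming a seeded permutation of digit meanings (the function and variable names are illustrative, not from the paper). Each seed deterministically yields a problem whose written symbols are reinterpreted, together with a ground-truth answer that can be checked automatically:

```python
import random
import operator

def make_counterfactual_instance(seed):
    """Sketch: an arithmetic problem whose digits are reinterpreted
    under a seeded permutation, with an automatically verifiable answer."""
    rng = random.Random(seed)
    # Seeded remapping of digit symbols: the written digit d "means" mapping[d].
    digits = list(range(10))
    meanings = digits[:]
    rng.shuffle(meanings)
    mapping = dict(zip(digits, meanings))

    # Sample a simple one-operator question over the perturbed symbols.
    a, b = rng.randint(0, 9), rng.randint(0, 9)
    op_symbol, op_fn = rng.choice([("+", operator.add), ("*", operator.mul)])
    question = f"{a} {op_symbol} {b}"
    # Ground-truth answer computed under the altered interpretation.
    answer = op_fn(mapping[a], mapping[b])
    return question, mapping, answer

def verify(candidate, answer):
    """Automatic verification: closed-form answers permit exact matching."""
    return candidate == answer
```

Because generation is driven entirely by the seed, the same seed reproduces the same instance (the "stability" and "comparability" properties the abstract mentions), while fresh seeds yield unseen instances at test time, resisting memorization.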
Problem

Research questions and friction points this paper is trying to address.

Creating dynamic math benchmarks resistant to test memorization issues
Generating verifiable math problems with altered number interpretations
Measuring true reasoning capabilities beyond pattern memorization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates math tests with altered number interpretations
Constructs dynamic benchmarks via random seeding
Automatically verifies answers for counterfactual instances