🤖 AI Summary
Existing mathematical evaluation benchmarks suffer from test-set memorization and overreliance on fixed symbolic rules, leading to overfitting and inaccurate assessment of reasoning capabilities. To address this, we propose a dynamic counterfactual evaluation framework that generates programmable, memory-resistant test instances via semantic perturbation of symbols (reinterpreting the logical meaning of numerals and operators) and integrates automated answer verification to construct a dynamic problem space spanning both inductive and deductive reasoning. The framework supports controllable seed-based generation and reproducible scaling, significantly improving evaluation robustness. Experimental results show that mainstream language models perform well on standard deductive tasks but generalize poorly; even math-specialized fine-tuned models fail under semantic perturbations. This confirms the benchmark's high sensitivity to overfitting and its effectiveness in distinguishing genuine mathematical reasoning from superficial pattern matching.
📝 Abstract
Conducting contamination-free evaluation of mathematical capabilities can be difficult for two reasons: models may memorize a test set once it is made public, and current mathematical benchmarks are prone to overfitting due to having limited diversity of symbols and rules, coupled with closed-ended answers. This paper proposes a method that leverages these shortcomings as useful features to construct a dynamic, counterfactual benchmark, which can be used both to reveal overfitting and to measure true reasoning. We demonstrate this via MatheMagic, which generates math test instances with the interpretations of numbers and operators altered, yet with automatically verifiable answers. Test instances are randomly seeded and constructed at test time to evaluate a model's induction or deduction capability, offering stability, extensibility, comparability, and robustness to overfitting. Our experiments find that models solve deduction more easily than induction, but they tend to revert to standard mathematical interpretations. Further analysis reveals that math-adapted models fail to exhibit a general "skill" of reasoning, and fine-tuning on induction tasks generalizes poorly.
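The core mechanism described above, seeded generation of counterfactual instances with automatically verifiable answers, can be illustrated with a minimal sketch. This is not the authors' implementation; the operator remappings and function names below are hypothetical assumptions chosen purely for illustration:

```python
import random

# Hypothetical counterfactual reinterpretation of operator symbols:
# '+' is redefined to mean subtraction and '*' to mean addition, so a
# model that pattern-matches standard arithmetic will fail, while the
# gold answer under the altered semantics remains machine-checkable.
COUNTERFACTUAL_OPS = {
    "+": lambda a, b: a - b,   # '+' now means subtraction
    "*": lambda a, b: a + b,   # '*' now means addition
}

def make_instance(seed: int) -> dict:
    """Generate one reproducible test instance from a random seed."""
    rng = random.Random(seed)
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    op = rng.choice(sorted(COUNTERFACTUAL_OPS))
    question = f"Under the new rules, what is {a} {op} {b}?"
    answer = COUNTERFACTUAL_OPS[op](a, b)  # gold answer under new semantics
    return {"question": question, "answer": answer}

def verify(instance: dict, model_output: int) -> bool:
    """Automatic answer verification: exact match against the gold answer."""
    return model_output == instance["answer"]
```

Because each instance is a deterministic function of its seed, the same test set can be regenerated for comparability across models, while fresh seeds at test time resist memorization.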