🤖 AI Summary
Current evaluations of mathematical reasoning in large language models rely on static benchmarks, which struggle to encompass cutting-edge advances and are prone to saturation. To address this limitation, this work proposes the first dynamic evaluation framework that evolves in synchrony with human mathematical discovery. The framework employs an automated pipeline to transform recent mathematical literature into executable and verifiable reasoning tasks, supporting temporal scalability, intrinsic correctness verification, and subfield customization. Its core technical components include automatic extraction of constructive results, generation of parameterized problem templates, and execution-driven deterministic solution validation. Using this framework, we construct EternalMath, a dynamic benchmark that reveals substantial performance gaps in state-of-the-art models when tackling frontier mathematical reasoning tasks.
📝 Abstract
Current evaluations of mathematical reasoning in large language models (LLMs) are dominated by static benchmarks, either derived from competition-style problems or curated through costly expert effort, resulting in limited coverage of research-level mathematics and rapid performance saturation. We propose a fully automated, theorem-grounded pipeline for evaluating frontier mathematical reasoning, which directly transforms recent peer-reviewed mathematical literature into executable and verifiable reasoning tasks. The pipeline identifies constructive or quantitative results, instantiates them into parameterized problem templates, and generates deterministic solutions through execution-based verification, enabling scalable, reproducible, and continuously updatable evaluation without reliance on large-scale expert authoring. By design, this approach supports temporal extensibility, intrinsic correctness checking, and domain-specific customization across mathematical subfields. Applying this pipeline yields **EternalMath**, an evolving evaluation suite derived from contemporary research papers. Experiments with state-of-the-art LLMs reveal substantial performance gaps, indicating that mathematical reasoning at the research frontier remains far from saturated and underscoring the need for evaluation methodologies that evolve in step with human mathematical discovery.
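To make the pipeline's three components concrete, here is a minimal Python sketch of what one task could look like: a quantitative result turned into a parameterized problem template whose ground-truth answer is produced and checked by execution. The specific result (Euler's totient), the function names, and the parameter ranges are illustrative assumptions, not the paper's actual templates or code.

```python
import random
from math import gcd

def euler_phi(n: int) -> int:
    """Reference solver: Euler's totient computed by brute force,
    so the ground-truth answer is deterministic and independently checkable."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def instantiate(seed: int) -> dict:
    """Sample template parameters and attach an execution-derived answer."""
    rng = random.Random(seed)
    n = rng.randint(100, 10_000)
    return {
        "prompt": f"Compute the number of integers in [1, {n}] that are coprime to {n}.",
        "answer": euler_phi(n),
    }

def verify(task: dict, model_output: str) -> bool:
    """Deterministic, execution-based check of a model's final answer."""
    try:
        return int(model_output.strip()) == task["answer"]
    except ValueError:
        return False

if __name__ == "__main__":
    task = instantiate(seed=42)
    print(task["prompt"])
    print(verify(task, str(task["answer"])))  # True by construction
```

Because each instance's answer comes from running a reference solver rather than from human annotation, new parameter draws yield fresh, automatically verifiable problems, which is the property that lets such a benchmark be updated as new results are ingested.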