🤖 AI Summary
Current AI systems fall well short of frontier mathematical reasoning across number theory, real analysis, algebraic geometry, and category theory, motivating high-fidelity, expert-grounded evaluation benchmarks. Method: We introduce FrontierMath, a benchmark targeting research-level mathematical competence. It comprises hundreds of novel, unpublished problems authored and vetted by expert mathematicians, each requiring hours to days of specialist effort to solve; problems are paired with automatically verifiable answers, which removes subjective grading, while the use of unpublished material minimizes the risk of data contamination; and the problem set spans a broad range of subfields and difficulty levels. Contribution/Results: Experiments show that state-of-the-art AI models solve fewer than 2% of the problems, quantifying the substantial capability gap between current AI systems and expert mathematicians and establishing a rigorous standard for evaluating mathematical reasoning.
📝 Abstract
We introduce FrontierMath, a benchmark of hundreds of original, exceptionally challenging mathematics problems crafted and vetted by expert mathematicians. The questions cover most major branches of modern mathematics -- from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory. Solving a typical problem requires multiple hours of effort from a researcher in the relevant branch of mathematics, and questions at the upper end can require multiple days. FrontierMath uses new, unpublished problems and automated verification to reliably evaluate models while minimizing the risk of data contamination. Current state-of-the-art AI models solve under 2% of problems, revealing a vast gap between AI capabilities and the prowess of the mathematical community. As AI systems advance toward expert-level mathematical abilities, FrontierMath offers a rigorous testbed that quantifies their progress.
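The abstract notes that FrontierMath relies on automated verification of answers rather than human grading. As a rough illustration only (the benchmark's actual verification scripts are not shown here), the sketch below assumes each problem has a single symbolically checkable answer and compares a model's submission to the reference with SymPy; the function name verify_answer and the exact-equality criterion are assumptions made for this example.

```python
import sympy as sp

def verify_answer(submitted: str, reference: sp.Expr) -> bool:
    """Check a submitted answer against the reference by exact symbolic equality.

    Hypothetical sketch: assumes each problem's answer is a single
    SymPy-expressible value (integer, rational, or closed-form expression).
    """
    try:
        candidate = sp.sympify(submitted)
    except (sp.SympifyError, SyntaxError):
        return False  # unparseable submissions count as incorrect
    # Simplify the difference to accept equivalent forms, e.g. "2/4" vs "1/2".
    return sp.simplify(candidate - reference) == 0

# Usage: reference answer 1/2, two candidate submissions.
print(verify_answer("2/4", sp.Rational(1, 2)))   # True
print(verify_answer("0.49", sp.Rational(1, 2)))  # False
```

Exact symbolic comparison of this kind avoids the scoring ambiguity of free-form proof grading, at the cost of restricting problems to those whose answers can be stated as definite, machine-checkable objects.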