HorizonMath: Measuring AI Progress Toward Mathematical Discovery with Automatic Verification

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work evaluates the potential of artificial intelligence for original research on significant unsolved mathematical problems. To this end, the authors introduce HorizonMath, the first automatically verifiable benchmark designed for open mathematical problems with unknown solutions. It comprises over 100 challenges spanning eight domains of computational and applied mathematics, accompanied by an open-source verification framework that mitigates data contamination and enables large-scale evaluation. By integrating large language models, formal reasoning tools, and automated verification algorithms, the platform enables a systematic assessment of AI's capacity for mathematical discovery. Experimental results indicate that even state-of-the-art models such as GPT 5.4 Pro achieve near-zero overall scores, with only two instances yielding solutions potentially superior to existing results (pending expert validation), underscoring both the difficulty of the tasks and the effectiveness of the proposed benchmark.

📝 Abstract
Can AI make progress on important, unsolved mathematical problems? Large language models are now capable of sophisticated mathematical and scientific reasoning, but whether they can perform novel research is still widely debated and underexplored. We introduce HorizonMath, a benchmark of over 100 predominantly unsolved problems spanning 8 domains in computational and applied mathematics, paired with an open-source evaluation framework for automated verification. Our benchmark targets a class of problems where discovery is hard, requiring meaningful mathematical insight, but verification is computationally efficient and simple. Because these solutions are unknown, HorizonMath is immune to data contamination, and most state-of-the-art models score near 0%. Existing research-level benchmarks instead rely on formal proof verification or manual review, both of which are expensive to scale. Using this platform, we find two problems for which GPT 5.4 Pro proposes solutions that improve on the best-known published results, representing potential novel contributions (pending expert review). We release HorizonMath as an open challenge and a growing community resource, where correct solutions to problems in the unsolved problem classes could constitute novel results in the mathematical literature.
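The "hard to discover, cheap to verify" problem class the abstract describes can be made concrete with a toy analogue. The sketch below is not HorizonMath's actual task set or verifier API; it uses Golomb rulers (sets of integer marks whose pairwise differences are all distinct), where finding a shortest ruler requires search or insight but checking a candidate is an O(n²) computation, and it scores candidates against a best-known value to mirror how an automatic verifier can flag potential improvements over published results.

```python
from itertools import combinations

BEST_KNOWN_LENGTH = 6  # optimal length of a 4-mark Golomb ruler (illustrative constant)

def verify_golomb_ruler(marks: list[int]) -> bool:
    """Cheap O(n^2) check: every pairwise difference must be distinct."""
    pairs = list(combinations(sorted(marks), 2))
    diffs = {b - a for a, b in pairs}
    return len(diffs) == len(pairs)

def score(marks: list[int], best_known_length: int) -> float:
    """Compare a verified candidate against the published record.
    A score above 1.0 means the candidate is strictly shorter than
    the best-known ruler, i.e. a potential new result."""
    if len(marks) < 2 or not verify_golomb_ruler(marks):
        return 0.0
    length = max(marks) - min(marks)
    return best_known_length / length

# Usage: [0, 1, 4, 6] is the known optimal 4-mark ruler.
candidate = [0, 1, 4, 6]
print(verify_golomb_ruler(candidate))       # True
print(score(candidate, BEST_KNOWN_LENGTH))  # 1.0
```

In HorizonMath's actual unsolved problem classes, a candidate scoring above the best-known value would, pending expert review, constitute a novel result in the literature, which is exactly the outcome the authors report for the two GPT 5.4 Pro solutions.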
Problem

Research questions and friction points this paper is trying to address.

AI progress
mathematical discovery
unsolved problems
automated verification
computational mathematics
Innovation

Methods, ideas, or system contributions that make the work stand out.

HorizonMath
automatic verification
unsolved mathematical problems
AI-driven discovery
scalable evaluation framework