TaoBench: Do Automated Theorem Prover LLMs Generalize Beyond MathLib?

πŸ“… 2026-03-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Current automated theorem proving systems perform well on standard libraries such as Mathlib but generalize poorly when confronted with non-standard, ground-up mathematical definitions. This work introduces TaoBench, a benchmark built on the axiomatic framework of Terence Tao's *Analysis I*, which constructs theorem-proving tasks independent of Mathlib and automatically generates a compilable, self-contained Lean environment for each problem. To isolate the impact of definitional frameworks, each task is paired with a semantically equivalent Mathlib version. This paired design enables the first systematic evaluation of model generalization across distinct foundational setups. Experiments reveal that state-of-the-art models suffer an average 26% drop in success rate on TaoBench compared to their Mathlib counterparts, indicating that the performance bottleneck stems from over-reliance on specific library conventions rather than intrinsic task difficulty, and highlighting a critical gap between existing benchmarks and the demands of real-world mathematical research.

πŸ“ Abstract
Automated theorem proving (ATP) benchmarks largely consist of problems formalized in Mathlib, so current ATP training and evaluation are heavily biased toward Mathlib's definitional framework. However, frontier mathematics is often exploratory and prototype-heavy, relying on bespoke constructions that deviate from standard libraries. In this work, we evaluate the robustness of current ATP systems when applied to a novel definitional framework, specifically examining the performance gap between standard library problems and bespoke mathematical constructions. We introduce TaoBench, an undergraduate-level benchmark derived from Terence Tao's Analysis I, which formalizes analysis by constructing core mathematical concepts from scratch, without relying on standard Mathlib definitions, as well as by mixing from-scratch and Mathlib constructions. For fair evaluation, we build an agentic pipeline that automatically extracts a compilable, self-contained local environment for each problem. To isolate the effect of definitional frameworks, we additionally translate every problem into a mathematically equivalent Mathlib formulation, yielding paired TaoBench-Mathlib statements for direct comparison. While state-of-the-art ATP models perform capably within the Mathlib framework, performance drops by an average of roughly 26% on the definitionally equivalent Tao formulation. This indicates that the main bottleneck is limited generalization across definitional frameworks rather than task difficulty. TaoBench thus highlights a gap between benchmark performance and real-world applicability, and provides a concrete foundation for developing and testing provers better aligned with research mathematics.
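To make the paired design concrete, the following is a minimal hypothetical sketch (not taken from the benchmark; the names `TaoNat` and `TaoNat.add` are illustrative) of how a from-scratch, Tao-style statement differs from its Mathlib counterpart in Lean 4. A prover trained mainly on Mathlib can close the Mathlib version with a known lemma, but must reason from the bespoke recursive definition in the from-scratch version:

```lean
-- From-scratch naturals in the spirit of Tao's Analysis I (illustrative only).
inductive TaoNat where
  | zero : TaoNat
  | succ : TaoNat β†’ TaoNat

-- Addition defined by recursion on the second argument.
def TaoNat.add : TaoNat β†’ TaoNat β†’ TaoNat
  | n, .zero   => n
  | n, .succ m => .succ (n.add m)

-- A TaoBench-style goal: not definitionally true here, so it needs induction.
theorem TaoNat.zero_add (n : TaoNat) : TaoNat.add .zero n = n := by
  induction n with
  | zero => rfl
  | succ m ih => simp [TaoNat.add, ih]

-- The paired Mathlib formulation of the same statement is immediate,
-- since the lemma already exists in the standard library:
example (n : β„•) : 0 + n = n := Nat.zero_add n
```

Both statements are mathematically equivalent, but only the second benefits from Mathlib's existing lemma names and simp set, which is exactly the dependence the paired design is meant to expose.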
Problem

Research questions and friction points this paper is trying to address.

automated theorem proving
generalization
definitional framework
Mathlib
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

automated theorem proving
generalization across definitional frameworks
TaoBench
Mathlib
benchmarking