Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving

📅 2025-02-11
🤖 AI Summary
High-quality formalized mathematical data remains scarce, hindering progress in automated theorem proving. Method: This paper introduces an iterative proof-data synthesis paradigm and presents Goedel-Prover, an open-source large language model. Statement formalizers are trained to translate natural-language math problems into Lean 4, producing 1.64 million formal statements, with LLM-based checks that each formal statement faithfully preserves the original problem. A series of provers is then trained iteratively: each prover attempts the statements its predecessors could not solve, and newly found proofs are added to the training set for the next round. Contribution/Results: Goedel-Prover achieves a 57.6% success rate (Pass@32) on miniF2F, setting a new state of the art among open-source whole-proof generation models, 7.6 percentage points above the previous best. It solves 7 problems on PutnamBench (Pass@512), ranking first on the leaderboard, and contributes 29.7K new formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by earlier works. To our knowledge, this is the first large-scale, closed-loop, self-improving framework for generating formal proof data.
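As a toy illustration of what statement formalization produces (our own example, not drawn from the paper's 1.64M-statement dataset), a natural-language problem and one possible Lean 4 counterpart:

```lean
-- Natural-language problem: "Show that n + 0 = n for every natural number n."
-- One possible Lean 4 formalization, here with a trivial proof:
theorem add_zero_toy (n : Nat) : n + 0 = n := by
  rfl
```

In the paper's pipeline, only the statement is machine-generated; finding proofs for such statements is the prover's job.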

📝 Abstract
We introduce Goedel-Prover, an open-source large language model (LLM) that achieves state-of-the-art (SOTA) performance in automated formal proof generation for mathematical problems. The key challenge in this field is the scarcity of formalized math statements and proofs, which we tackle in the following ways. We train statement formalizers to translate the natural language math problems from Numina into formal language (Lean 4), creating a dataset of 1.64 million formal statements. LLMs are used to check that the formal statements accurately preserve the content of the original natural language problems. We then iteratively build a large dataset of formal proofs by training a series of provers. Each prover succeeds in proving many statements that the previous ones could not, and these new proofs are added to the training set for the next prover. The final prover outperforms all existing open-source models in whole-proof generation. On the miniF2F benchmark, it achieves a 57.6% success rate (Pass@32), exceeding the previous best open-source model by 7.6%. On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), ranking first on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by earlier works.
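The iterative prover-training procedure described above can be sketched as an expert-iteration loop. This is our own illustration under stated assumptions, not the authors' code: `train` and `prove` are hypothetical stand-ins for supervised fine-tuning and sampling-based proof search, respectively.

```python
# Hedged sketch of the iterative proof-data expansion loop: train a prover
# on the current corpus, attempt the still-unsolved statements, and fold
# any newly found proofs back into the training set for the next round.

def expert_iteration(statements, train, prove, rounds=3):
    corpus = []                    # accumulated (statement, proof) pairs
    unsolved = set(statements)
    for _ in range(rounds):
        prover = train(corpus)     # stand-in for supervised fine-tuning
        newly_proved = {s: p for s in unsolved
                        if (p := prove(prover, s)) is not None}
        corpus.extend(newly_proved.items())
        unsolved -= newly_proved.keys()
    return corpus, unsolved

# Toy demo: a statement's "difficulty" is its integer value; the prover's
# "skill" grows with the size of its training corpus, so each round
# unlocks statements the previous rounds could not solve.
toy_train = lambda corpus: len(corpus)
toy_prove = lambda skill, s: f"proof-{s}" if s <= skill else None
corpus, unsolved = expert_iteration(range(5), toy_train, toy_prove, rounds=5)
```

The key property the toy demo captures is the self-improving loop: each round's successes enlarge the next round's training set, which is what lets later provers solve statements earlier ones could not.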
Problem

Research questions and friction points this paper addresses.

Scarcity of formalized math statements and proofs for training theorem provers
Lack of a strong open-source LLM for automated formal proof generation
Need for a large, faithful dataset of formal statements and proofs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statement formalizers translating 1.64M natural-language problems into Lean 4, with LLM-based faithfulness checks
Iterative prover training in which each round's new proofs expand the next round's training set
State-of-the-art open-source whole-proof generation (miniF2F, PutnamBench, Lean Workbook)
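The Pass@k numbers quoted above (Pass@32 on miniF2F, Pass@512 on PutnamBench) follow the usual convention: a problem counts as solved if any of its k sampled proof attempts is verified. A minimal sketch of the computation, with illustrative data of our own:

```python
# Pass@k success rate: fraction of problems with at least one verified
# proof among their k sampled attempts.
def pass_at_k(results):
    """results: one list of booleans per problem (k attempts each)."""
    return sum(any(attempts) for attempts in results) / len(results)

# Two problems, k = 3 attempts each; only the first is ever solved.
rate = pass_at_k([[False, True, False], [False, False, False]])
```

Larger k trades compute for coverage, which is why harder benchmarks like PutnamBench are reported at a much higher k.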