Goedel-Prover-V2: Scaling Formal Theorem Proving with Scaffolded Data Synthesis and Self-Correction

📅 2025-08-05
🤖 AI Summary
This work addresses the low proof success rates and poor generalization of language models in automated formal theorem proving. To overcome these limitations, we propose three key techniques: (1) a scaffolded data generation framework that constructs structured synthetic proofs of increasing difficulty; (2) a verifier-guided self-correction mechanism leveraging Lean compiler feedback to drive reinforcement learning and expert iteration; and (3) a multi-stage model averaging and fusion strategy. Our open-source model, Goedel-Prover-V2-32B, achieves 90.4% pass@32 on MiniF2F in self-correction mode and solves 86 problems on PutnamBench, substantially outperforming leading open-source systems and matching the reported performance of several proprietary models under a constrained test-time compute budget. These advances establish an efficient, reproducible paradigm for verifiable mathematical reasoning.
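The verifier-guided self-correction mechanism can be sketched as a simple generate-check-revise loop. This is a minimal illustration, not the paper's implementation: `generate_proof`, `revise_proof`, and `lean_check` are hypothetical stand-ins for the model call and the Lean compiler check.

```python
def self_correct(theorem, generate_proof, revise_proof, lean_check, max_rounds=3):
    """Verifier-guided self-correction: draft a proof, then let the
    Lean compiler's diagnostics drive iterative revision.

    generate_proof(theorem)             -> proof string
    lean_check(theorem, proof)          -> (ok: bool, errors: list[str])
    revise_proof(theorem, proof, errs)  -> revised proof string
    """
    proof = generate_proof(theorem)
    for _ in range(max_rounds):
        ok, errors = lean_check(theorem, proof)
        if ok:
            return proof, True
        # Feed the compiler's error messages back to the model.
        proof = revise_proof(theorem, proof, errors)
    ok, _ = lean_check(theorem, proof)
    return proof, ok
```

In the paper's pipeline, successful (proof, feedback, revision) trajectories of this kind also serve as training signal for reinforcement learning and expert iteration.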

📝 Abstract
We introduce Goedel-Prover-V2, a series of open-source language models that set a new state-of-the-art in automated theorem proving. Built on the standard expert iteration and reinforcement learning pipeline, our approach incorporates three key innovations: (1) Scaffolded data synthesis: We generate synthetic tasks of increasing difficulty to train the model to master increasingly complex theorems; (2) Verifier-guided self-correction: We enable the model to iteratively revise its proofs by leveraging feedback from the Lean compiler; (3) Model averaging: We merge model checkpoints to mitigate the decrease in model output diversity in later stages of training. Our small model, Goedel-Prover-V2-8B, reaches 84.6% pass@32 on MiniF2F and outperforms DeepSeek-Prover-V2-671B under the same metric, despite being 80X smaller. Our flagship model, Goedel-Prover-V2-32B, achieves 88.1% on MiniF2F at pass@32 in standard mode and 90.4% in self-correction mode, outperforming prior SOTA by a large margin. Additionally, our flagship model solves 86 problems on PutnamBench at pass@184, securing the first place among open-source models on the leaderboard, surpassing DeepSeek-Prover-V2-671B's record of solving 47 problems by pass@1024 with a significantly smaller model size and compute budget. At the time of its release (July-August 2025), Goedel-Prover-V2 achieves the strongest overall performance among all open-source theorem provers. It also ranks among the top-performing models--including closed-source systems with publicly reported performance--under a constrained test-time compute budget. Our models, code, and data are released at https://github.com/Goedel-LM/Goedel-Prover-V2.
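The abstract reports results as pass@k (pass@32, pass@184, pass@1024): the probability that at least one of k sampled proofs compiles. When n >= k attempts are available, pass@k is conventionally computed with the unbiased estimator 1 - C(n-c, k) / C(n, k), where c of the n attempts succeed; a minimal sketch (this is the standard estimator, not code from the paper's release):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    attempts, drawn without replacement from n total attempts of which
    c are correct, succeeds. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct attempt
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With n = k (e.g. exactly 32 samples per problem for pass@32), this reduces to checking whether any sample succeeds.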
Problem

Research questions and friction points this paper is trying to address.

Low proof success rates and poor generalization of language models in formal theorem proving
Inability of models to revise failing proofs using Lean compiler feedback
Declining model output diversity in later stages of training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scaffolded data synthesis for complex theorems
Verifier-guided self-correction via Lean compiler
Checkpoint averaging to mitigate loss of output diversity
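The model-averaging idea above merges checkpoints by averaging their parameters element-wise. A minimal sketch over plain Python dicts (a real pipeline would average framework state dicts, e.g. `torch` tensors; the helper name and uniform weighting are our assumptions):

```python
def average_checkpoints(checkpoints):
    """Uniformly average parameters (here: lists of floats) across
    checkpoints that share the same parameter names."""
    n = len(checkpoints)
    return {
        name: [sum(vals) / n for vals in zip(*(ckpt[name] for ckpt in checkpoints))]
        for name in checkpoints[0]
    }
```

Averaging a later, lower-diversity checkpoint with an earlier one is intended to recover some of the sample diversity lost during late-stage training.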