🤖 AI Summary
Existing formal theorem-proving models suffer from inefficient training and poor generalization due to their reliance on static, fixed training datasets. To address this, we propose GAR (Generative Adversarial Reinforcement learning), the first framework to jointly optimize problem generation and proof synthesis via adversarial co-evolution between a Composer (problem generator) and a Solver (theorem prover), guided by verifiable environmental feedback. GAR incorporates an implicit curriculum learning mechanism that dynamically aligns task difficulty with model capability, enabling multi-granularity policy exploration and self-play. Evaluated on MiniF2F-Test, Goedel-Prover-V2-8B and DeepSeek-Prover-V2-7B achieve an average relative improvement of 4.20% in pass@32; on ProofNet-Test, DeepSeek-Prover-V2's pass@32 rises from 22.58% to 25.81%. These results demonstrate a clear advance over conventional static-data training paradigms.
📝 Abstract
Solving math problems through verifiable languages such as Lean has significantly impacted both the mathematics and computer science communities. Current state-of-the-art models are often trained with expensive online Reinforcement Learning (RL) or expert iteration. However, these approaches rely on fixed problem sets, which makes training inefficient and limits the model's ability to tackle complex problems. To overcome these limitations, we propose GAR: Generative Adversarial Reinforcement learning, a comprehensive RL training framework that jointly trains the problem composer and solver in an adversarial loop. GAR introduces an implicit curriculum learning mechanism that aligns task difficulty with the prover's evolving capability. It thereby improves training efficiency and enables stronger performance in proving advanced theorems. Experiments show that with GAR training, Goedel-Prover-V2-8B and DeepSeek-Prover-V2-7B achieve an average relative improvement in pass@32 of 4.20% on the MiniF2F-Test benchmark, while DeepSeek-Prover-V2's pass@32 on ProofNet-Test increases from 22.58% to 25.81%. Beyond formal proving, GAR establishes a general RL paradigm for the co-evolution of problem generation and solving in verifiable environments.
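The composer-solver loop with an implicit curriculum can be illustrated with a toy simulation. Everything here is a hypothetical simplification for intuition only: the `Composer` and `Solver` classes, the scalar difficulty/capability values, and the update rules are illustrative stand-ins for the paper's actual policy-gradient training against a Lean verifier.

```python
class Composer:
    """Toy problem generator: its 'problem' is just a difficulty level.

    Hypothetical stand-in for GAR's Composer policy.
    """
    def __init__(self):
        self.difficulty = 1.0

    def propose(self):
        return self.difficulty

    def update(self, solved):
        # Adversarial/curriculum signal: push difficulty up when the
        # solver succeeds, ease off when it fails, so proposed tasks
        # hover near the solver's current frontier.
        self.difficulty += 0.2 if solved else -0.1
        self.difficulty = max(self.difficulty, 0.1)


class Solver:
    """Toy prover: solves any problem up to its current capability."""
    def __init__(self):
        self.capability = 1.0

    def attempt(self, difficulty):
        # Stand-in for attempting a proof and checking it with a verifier.
        return self.capability >= difficulty

    def update(self, solved):
        # Verifiable reward: capability grows only on verified successes.
        if solved:
            self.capability += 0.15


def train(steps=200):
    """Run the adversarial co-evolution loop for a fixed number of steps."""
    composer, solver = Composer(), Solver()
    for _ in range(steps):
        problem = composer.propose()
        solved = solver.attempt(problem)
        composer.update(solved)
        solver.update(solved)
    return composer, solver
```

Running `train()` shows both quantities ratcheting upward together: the composer's difficulty tracks the solver's growing capability, which is the curriculum effect the abstract describes, albeit in a deliberately cartoonish form.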