ProofOptimizer: Training Language Models to Simplify Proofs without Human Demonstrations

📅 2025-10-17
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Formal proofs produced by neural provers are often excessively long and hard to read, labeled data for proof simplification is scarce, and existing methods struggle to compress ultra-long proofs. Method: The authors propose the first end-to-end neural proof-simplification framework that requires no human demonstrations. It combines expert iteration with reinforcement learning, using the Lean theorem prover as a verifier to supply the training signal, and shortens proofs through iterative refinement. Contribution/Results: This work is the first to apply RL and expert iteration to proof compression, eliminating reliance on human annotations or expert demonstrations. Experiments on miniF2F, PutnamBench, and IMO 2025 demonstrate average proof-length reductions of 87%, 57%, and 49%, respectively, alongside faster Lean verification and improved downstream prover performance when the simplified proofs are reused as training data.

πŸ“ Abstract
Neural theorem proving has advanced rapidly in the past year, reaching IMO gold-medalist capabilities and producing formal proofs that span thousands of lines. Although such proofs are mechanically verified by formal systems like Lean, their excessive length renders them difficult for humans to comprehend and limits their usefulness for mathematical insight. Proof simplification is therefore a critical bottleneck. Yet, training data for this task is scarce, and existing methods -- mainly agentic scaffolding with off-the-shelf LLMs -- struggle with the extremely long proofs generated by RL-trained provers. We introduce ProofOptimizer, the first language model trained to simplify Lean proofs without requiring additional human supervision. ProofOptimizer is trained via expert iteration and reinforcement learning, using Lean to verify simplifications and provide training signal. At inference time, it operates within an iterative proof-shortening workflow, progressively reducing proof length. Experiments show that ProofOptimizer substantially compresses proofs generated by state-of-the-art RL-trained provers on standard benchmarks, reducing proof length by 87% on miniF2F, 57% on PutnamBench, and 49% on Seed-Prover's IMO 2025 proofs. Beyond conciseness, the simplified proofs check faster in Lean and further improve downstream prover performance when reused as training data for supervised finetuning.
Problem

Research questions and friction points this paper is trying to address.

Simplifying excessively long formal proofs for human comprehension
Addressing scarce training data for proof simplification tasks
Reducing proof length while maintaining mechanical verification integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains language models via expert iteration and reinforcement learning
Uses Lean verification as the training signal
Iteratively shortens proofs at inference time without human supervision
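The iterative proof-shortening workflow above can be sketched as a simple loop: sample a candidate simplification, keep it only if Lean still verifies it and it is shorter, and repeat. This is a minimal sketch, not the paper's implementation; `propose_shorter` and `lean_verifies` are hypothetical stand-ins for the trained language model and the Lean checker.

```python
def lean_verifies(proof: str) -> bool:
    # Stand-in for invoking the Lean checker on a candidate proof.
    # For illustration only: accept non-empty proofs ending in "qed".
    return len(proof.strip()) > 0 and proof.strip().endswith("qed")

def propose_shorter(proof: str) -> str:
    # Stand-in for sampling a simplification from the language model.
    # Toy behavior: drop the first proof step.
    lines = proof.splitlines()
    return "\n".join(lines[1:]) if len(lines) > 1 else proof

def shorten(proof: str, rounds: int = 10) -> str:
    """Iteratively replace the proof with shorter, still-verified candidates."""
    best = proof
    for _ in range(rounds):
        candidate = propose_shorter(best)
        # Keep a candidate only if it is strictly shorter AND Lean accepts it,
        # so every intermediate proof remains mechanically verified.
        if len(candidate) < len(best) and lean_verifies(candidate):
            best = candidate
    return best

toy_proof = "step1\nstep2\nstep3\nqed"
print(shorten(toy_proof))  # → qed
```

In the actual system, verified shorter proofs also feed back into training (expert iteration) or serve as the reward signal (RL), whereas this sketch only shows the inference-time loop.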
🔎 Similar Papers
No similar papers found.