🤖 AI Summary
This work addresses the lack of concise and reproducible baseline systems in AI-driven automated theorem proving, which hinders fair architectural comparisons. To this end, we propose a minimalist yet competitive proof agent that integrates three core mechanisms: iterative proof refinement, theorem library retrieval, and context management. The system enables systematic evaluation of diverse large language models and design choices, achieving performance on par with state-of-the-art methods across multiple heterogeneous benchmarks. Our experiments demonstrate that iterative proof generation significantly outperforms single-pass generation, offering superior sample efficiency and reduced inference cost. The codebase is publicly released to provide the community with a standardized reference implementation.
📝 Abstract
We propose a minimal agentic baseline that enables systematic comparison across AI-based theorem-prover architectures. The design implements the core features shared by state-of-the-art systems: iterative proof refinement, library search, and context management. We evaluate the baseline on qualitatively different benchmarks, comparing popular models and design choices, and demonstrate performance competitive with state-of-the-art approaches despite a significantly simpler architecture. Our results show consistent advantages of iterative refinement over multiple single-shot generations, particularly in sample efficiency and cost effectiveness. The implementation is released open-source as a candidate reference for future research and as an accessible prover for the community.
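The iterative refinement advocated in the abstract can be pictured as a generate-verify-feedback loop: each failed attempt's error message is fed back into the next generation, rather than sampling independent single-shot attempts. The sketch below illustrates only the control flow; all names (`refine_proof`, `generate`, `verify`, and the toy stand-ins) are hypothetical, not the paper's actual API, and a real system would call an LLM and a proof checker (e.g. Lean) in their place.

```python
# Hypothetical sketch of an iterative proof-refinement loop.
# A real agent would replace `generate` with an LLM call and
# `verify` with a proof-assistant check; these names are illustrative.
from typing import Callable, List, Optional


def refine_proof(
    generate: Callable[[str, List[str]], str],  # goal + past errors -> attempt
    verify: Callable[[str], Optional[str]],     # attempt -> error, or None if valid
    goal: str,
    max_iters: int = 5,
) -> Optional[str]:
    """Generate a proof attempt, feeding verifier errors back as context."""
    errors: List[str] = []
    for _ in range(max_iters):
        attempt = generate(goal, errors)
        err = verify(attempt)
        if err is None:
            return attempt      # proof accepted by the checker
        errors.append(err)      # accumulate feedback for the next attempt
    return None                 # iteration budget exhausted


# Toy stand-ins: the "model" succeeds once it has seen two error messages.
def toy_generate(goal: str, errors: List[str]) -> str:
    return f"proof_of({goal})_v{len(errors)}"


def toy_verify(attempt: str) -> Optional[str]:
    return None if attempt.endswith("_v2") else f"type error in {attempt}"


print(refine_proof(toy_generate, toy_verify, "add_comm"))
# -> proof_of(add_comm)_v2
```

The key design point is that the error list grows across iterations, so each attempt conditions on all prior failures, which is what gives the iterative approach its sample-efficiency edge over independent single-shot sampling.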