🤖 AI Summary
This work addresses the low accuracy and unreliable verification of automatic natural-language-to-Lean-4 formalization (autoformalization). We propose a two-stage optimization framework that integrates scalable type-check filtering with self-consistency filtering. Methodologically, we combine Lean 4's native type checker, symbolic equivalence checking, and large language model (LLM) generation and re-ranking, and incorporate self-consistent sampling to improve output reliability. Our key contributions are: (1) RLM25, the first research-level mathematics dataset oriented toward Lean 4; (2) a corrected ProofNet benchmark and a new ProofNetVerif benchmark with human-verified annotations; and (3) the open-source release of all code, including a novel symbolic equivalence tool, together with the three benchmarks. On ProofNet, our approach achieves an absolute accuracy gain of +18.4%, significantly advancing the practicality of autoformalization for theorem proving.
📝 Abstract
Autoformalization, the automatic translation of unconstrained natural language into formal languages, has garnered significant attention for its potential applications in theorem proving, formal verification, and checking LLM outputs. In this work, we analyze both current autoformalization methods and the processes used to evaluate them, focusing on the Lean 4 theorem proving language. We demonstrate that scaling type-check filtering with self-consistency techniques on top of existing methods significantly improves performance, achieving absolute accuracy gains of up to +18.4% on ProofNet. To support reproducibility and further research, we release our code, including a new symbolic equivalence check for Lean formulas. We also release three new benchmarks: RLM25, a research-level mathematics dataset; a corrected ProofNet; and ProofNetVerif, with labeled correct and incorrect autoformalization pairs for evaluating metrics.
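The type-check-then-self-consistency idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `generate` and `type_checks` are hypothetical callables standing in for LLM sampling and Lean 4's type checker, and in practice the surviving candidates would also be compared via symbolic equivalence rather than exact string match.

```python
from collections import Counter

def autoformalize(statement, generate, type_checks, n_samples=16):
    """Sample n candidate formalizations of a natural-language statement,
    discard those that fail type checking, then return the most frequent
    survivor (self-consistency via majority vote)."""
    candidates = [generate(statement) for _ in range(n_samples)]
    well_typed = [c for c in candidates if type_checks(c)]
    if not well_typed:
        return None  # no candidate passed the type checker
    return Counter(well_typed).most_common(1)[0][0]
```

Scaling `n_samples` increases the chance that at least one candidate is both well-typed and agreed upon by multiple samples, which is the filtering effect the paper reports improving accuracy.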