🤖 AI Summary
This work proposes a scheme termed "agentic proof automation" to address the high cost of manually crafting lengthy formal proof scripts. In this approach, human experts supply the key mathematical insights (definitions, theorems, proof strategies), while large language model (LLM) agents autonomously generate and iteratively refine proof scripts within the Lean 4 environment. Relying solely on off-the-shelf LLM agents and a single lightweight proof-checking tool, the work demonstrates, for the first time, efficient large-scale formal verification carried out by LLM agents under human guidance. Evaluated on the 14,000-line type soundness proof of System Capless, the agents completed 189 of 217 proof engineering tasks (an 87% success rate), with only 16% of the tasks requiring human intervention.
📝 Abstract
Proof engineering is notoriously labor-intensive: proofs that are straightforward on paper often require lengthy scripts in theorem provers. Recent advances in large language models (LLMs) create new opportunities for proof automation: modern LLMs not only generate proof scripts, but also support agentic behavior, exploring codebases and iteratively refining their outputs against prover feedback. These advances enable an emerging scheme where LLM-based agents undertake most proof engineering under human guidance. Humans provide mathematical insight (definitions, theorems, proof strategies); agents handle the mechanical work of proof development. We call this scheme agentic proof automation. We present this scheme through a case study: mechanizing the semantic type soundness of a sophisticated formal system, System Capless, in Lean 4, comprising over 14,000 lines of code. Using off-the-shelf LLM agents with a single lightweight proof-checking tool, the agents completed 189 of 217 proof engineering tasks, an 87% success rate, with only 16% requiring human intervention. The case study demonstrates that agents are capable proof engineers that substantially boost productivity, though they fall short in creative reasoning and still require human guidance in certain cases. We release an interactive explorer where readers can examine all agent interactions; the mechanization is open-sourced for experiments and extensions.
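To make the division of labor concrete, here is a minimal illustrative Lean 4 sketch, not drawn from the actual System Capless mechanization: the human states a lemma (the mathematical insight), leaving the proof as `sorry`, and the agent iterates against Lean's checker until it finds a script that is accepted.

```lean
-- Illustrative toy example (hypothetical names, not from the mechanization).

-- Human-provided statement: the insight is *what* to prove.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  sorry

-- A proof script an agent might converge on after prover feedback:
-- induction on n, using core lemmas Nat.add_succ and Nat.succ_add.
theorem add_comm'_solved (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp
  | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

In the scheme described above, the agent does not invent the statement; it performs the mechanical search for a script the prover accepts, resubmitting after each error message from the proof-checking tool.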