🤖 AI Summary
To address the challenge of neural theorem provers failing to sustainably generate correct proofs in fully autonomous mode, this paper proposes a human-in-the-loop formal theorem proving framework for Lean. Methodologically, it enables native execution of large language models (LLMs) within Lean—supporting both local and cloud-based models via a plugin-architected integration—and adopts a human-led, model-assisted paradigm featuring lightweight interactive capabilities: step-wise suggestions, goal completion, and premise selection. Technically, it unifies Lean’s plugin infrastructure, an extensible LLM inference engine (CPU/GPU/cloud-compatible), formal mathematics fine-tuning, and a real-time proof-state interaction interface. Experiments on the *Mathematics in Lean* dataset show that human–AI collaboration requires only 2.08 manual interventions per proof on average (outperforming aesop’s 3.86), while achieving a 74.2% fully automated step-wise success rate—an 85% improvement over the baseline. All code and models are released under the MIT License.
📝 Abstract
Neural theorem proving combines large language models (LLMs) with proof assistants such as Lean, where the correctness of formal proofs can be rigorously verified, leaving no room for hallucination. Although existing neural theorem provers, pretrained on a fixed collection of data, offer valuable suggestions at times, it is challenging for them to continually prove novel theorems in a fully autonomous mode, where human insights may be critical. In this paper, we explore LLMs as copilots that assist humans in proving theorems. We introduce Lean Copilot, a general framework for running LLM inference natively in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users. Lean users can use our pretrained models or bring their own models, running either locally (with or without GPUs) or on the cloud. Using Lean Copilot, we build LLM-based tools that suggest proof steps, complete proof goals, and select relevant premises. Experimental results on the Mathematics in Lean textbook demonstrate the effectiveness of our method compared to existing rule-based proof automation in Lean (aesop). When assisting humans, Lean Copilot requires only 2.08 manually entered proof steps on average (versus 3.86 for aesop); when automating the theorem proving process, Lean Copilot automates 74.2% of proof steps on average, 85% better than aesop (40.1%). We open-source all code and artifacts under a permissive MIT license to facilitate further research.
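To make the described workflow concrete, here is a minimal sketch of how such LLM-backed tools would be invoked from inside a Lean proof. It assumes the Lean Copilot package is installed and imported; the tactic names `suggest_tactics`, `search_proof`, and `select_premises` follow the three capabilities the abstract lists (step suggestion, goal completion, premise selection), and the exact names and behavior may differ from the released implementation.

```lean
import LeanCopilot  -- assumed package providing the LLM-backed tactics

example (a b : Nat) : a + b = b + a := by
  -- Ask the model for candidate next tactics for the current goal;
  -- suggestions appear in the infoview and can be clicked to insert.
  suggest_tactics

example (a b c : Nat) : a * (b + c) = a * b + a * c := by
  -- Ask the model to search for a complete proof of the goal,
  -- combining generated tactics with tree search.
  search_proof

example (a b : Nat) (h : a ≤ b) : a ≤ b + 1 := by
  -- Retrieve lemmas from the library likely relevant to the goal.
  select_premises
  exact Nat.le_succ_of_le h
```

In this human-led paradigm, the user stays in control: each tactic surfaces model output as checkable suggestions in the editor rather than committing to them, so every accepted step is still verified by Lean's kernel.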