Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean

📅 2024-04-18
📈 Citations: 27
Influential: 1
🤖 AI Summary
To address the challenge that existing neural theorem provers struggle to continually prove novel theorems in a fully autonomous mode, this paper proposes a human-in-the-loop formal theorem proving framework for Lean. Methodologically, it enables native execution of large language models (LLMs) within Lean, supporting both local and cloud-based models through a plugin-style integration, and adopts a human-led, model-assisted paradigm with lightweight interactive capabilities: step-wise tactic suggestions, proof-goal completion, and premise selection. Technically, it combines Lean's plugin infrastructure, an extensible LLM inference engine (CPU-, GPU-, and cloud-compatible), fine-tuning on formal mathematics, and a real-time proof-state interaction interface. Experiments on theorems from the *Mathematics in Lean* textbook show that, when assisting humans, Lean Copilot requires only 2.08 manually entered proof steps on average (versus 3.86 for aesop), and when proving autonomously it automates 74.2% of proof steps on average, an 85% improvement over the aesop baseline (40.1%). All code and models are released under the MIT License.

📝 Abstract
Neural theorem proving combines large language models (LLMs) with proof assistants such as Lean, where the correctness of formal proofs can be rigorously verified, leaving no room for hallucination. Because existing neural theorem provers are pretrained on a fixed collection of data and offer valuable suggestions only at times, it is challenging for them to continually prove novel theorems in a fully autonomous mode, where human insights may be critical. In this paper, we explore LLMs as copilots that assist humans in proving theorems. We introduce Lean Copilot, a general framework for running LLM inference natively in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users. Lean users can use our pretrained models or bring their own models that run either locally (with or without GPUs) or on the cloud. Using Lean Copilot, we build LLM-based tools that suggest proof steps, complete proof goals, and select relevant premises. Experimental results on the Mathematics in Lean textbook demonstrate the effectiveness of our method compared to existing rule-based proof automation in Lean (aesop). When assisting humans, Lean Copilot requires only 2.08 manually entered proof steps on average (3.86 required by aesop); when automating the theorem proving process, Lean Copilot automates 74.2% of proof steps on average, 85% better than aesop (40.1%). We open source all code and artifacts under a permissive MIT license to facilitate further research.
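As a concrete illustration of the human-in-the-loop workflow the abstract describes, the sketch below shows how a Lean user might ask for step-wise suggestions mid-proof. The tactic name `suggest_tactics` follows the open-sourced Lean Copilot repository; treat the exact identifiers and the toy goal here as illustrative assumptions rather than a verified excerpt from the paper.

```lean
import LeanCopilot

-- Invoking `suggest_tactics` at the current goal queries the LLM and
-- displays candidate next steps in the infoview; the user clicks one
-- to insert it, keeping the human in control of the proof.
example (a b : Nat) : a + b = b + a := by
  suggest_tactics
```

Suggested tactics are checked by Lean before being surfaced, so an accepted suggestion can never introduce an incorrect step.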
Problem

Research questions and friction points this paper is trying to address.

Enhance theorem proving with LLMs as human copilots.
Develop Lean Copilot for seamless LLM integration in Lean.
Improve proof automation efficiency and reduce manual steps.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs integrated with Lean for theorem proving
Lean Copilot framework enables native LLM inference
Tools suggest steps, complete goals, select premises
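The two remaining tools, goal completion and premise selection, can be sketched in the same style. As above, the tactic names `search_proof` and `select_premises` follow the open-sourced Lean Copilot repository, and the example goals are illustrative assumptions.

```lean
import LeanCopilot

-- Goal completion: search for an entire proof of the current goal,
-- combining LLM-generated tactics with aesop-style search.
example (a b : Nat) : a + b = b + a := by
  search_proof

-- Premise selection: retrieve lemmas from the environment that are
-- likely relevant to the goal, annotating them in the infoview so the
-- user can finish the proof manually.
example (a b : Nat) : a + b = b + a := by
  select_premises
  sorry
```

All three tools run through the same native inference engine, so they work with either the bundled pretrained models or user-supplied local or cloud models.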
Peiyang Song
UC Santa Barbara, U.S.A.; California Institute of Technology, U.S.A.

Kaiyu Yang
Meta FAIR
machine learning · automated reasoning · neural theorem proving · neuro-symbolic AI

A. Anandkumar
California Institute of Technology, U.S.A.