From Informal to Formal -- Incorporating and Evaluating LLMs on Natural Language Requirements to Verifiable Formal Proofs

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face challenges in formal mathematical verification, including capability coupling, coarse-grained evaluation, and scarcity of high-quality, language-diverse training data. Method: We systematically decouple formal verification into six fine-grained subtasks (e.g., specification translation, proof completion) and, by distilling GPT-4o, construct an 18K-sample high-quality instruction-response dataset covering five mainstream formal languages: Coq, Lean4, Dafny, ACSL, and TLA+. The data is split into FM-alpaca, a 14K+ set for supervised fine-tuning (SFT), and FM-Bench, the first cross-language, task-decoupled benchmark for formal verification. Contribution/Results: Empirical results show that fine-tuning on formalization data significantly improves formal verification performance (up to a 2.9× gain) and transfers positively to mathematical reasoning and programming tasks. Both the models and the benchmark are publicly released.
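To make the subtask granularity concrete, a "proof completion" instance in Lean4 might look like the following minimal sketch. This is a hypothetical toy example for illustration only, not an item drawn from FM-Bench: the input is the theorem statement, and the model must supply the proof body.

```lean
-- Toy illustration of the "proof completion" subtask (hypothetical,
-- not an actual FM-Bench item): given the statement, produce the
-- `by ...` proof body.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

The other subtasks vary which part is given and which must be generated, e.g., specification translation maps a natural-language requirement to the formal statement itself.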

📝 Abstract
Research in AI-based formal mathematical reasoning has grown rapidly, with notable progress in mathematical competitions such as the IMO. However, these studies intertwine multiple skills simultaneously, i.e., problem-solving, reasoning, and writing formal specifications, making it hard to precisely identify LLMs' strengths and weaknesses in each task. This paper focuses on formal verification, an immediate application scenario of formal reasoning, and decomposes it into six sub-tasks. We constructed 18k high-quality instruction-response pairs across five mainstream formal specification languages (Coq, Lean4, Dafny, ACSL, and TLA+) in six formal-verification-related tasks by distilling GPT-4o. They are split into a 14k+ fine-tuning dataset, FM-alpaca, and a 4k benchmark, FM-Bench. We found that LLMs are good at writing proof segments when given either the code or a detailed description of the proof steps. Moreover, fine-tuning brought up to a nearly threefold improvement. Interestingly, we observed that fine-tuning with formal data also enhances mathematics, reasoning, and coding abilities. We hope our findings inspire further research. Fine-tuned models are released to facilitate subsequent studies.
Problem

Research questions and friction points this paper is trying to address.

AI Performance
Formal Verification
Mathematical Proof
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4
Formal Verification in Mathematics
Enhanced AI Capabilities