Lean Workbook: A large-scale Lean problem set formalized from natural language math problems

📅 2024-06-06
🏛️ Neural Information Processing Systems
📈 Citations: 41
Influential: 6
🤖 AI Summary
Formal theorem proving with LLMs is hindered by the scarcity of high-quality paired natural language–Lean 4 data. To address this, the paper proposes a bidirectional synthetic data construction pipeline for mathematical theorem proving: an LLM-driven, iterative generate-and-filter loop that combines rule-based filtering, semantic consistency checks between the natural-language and Lean 4 versions of each problem, and proof-search feedback. The resulting Lean Workbook dataset contains about 57K formal-informal problem pairs drawn from a math contest forum, proofs found by automated search, and 21 newly formalized International Mathematical Olympiad (IMO) problems; the dataset (on Hugging Face) and the implementation code (on GitHub) are both released openly. Experiments show improved LLM performance on translating between informal and formal statements, understanding propositions, and generating proofs.

📝 Abstract
Large language models have demonstrated impressive capabilities across various natural language processing tasks, especially in solving mathematical problems. However, large language models are not good at math theorem proving using formal languages like Lean. A significant challenge in this area is the scarcity of training data available in these formal languages. To address this issue, we propose a novel pipeline that iteratively generates and filters synthetic data to translate natural language mathematical problems into Lean 4 statements, and vice versa. Our results indicate that the synthetic data pipeline can provide useful training data and improve the performance of LLMs in translating and understanding complex mathematical problems and proofs. Our final dataset contains about 57K formal-informal question pairs along with searched proofs from the math contest forum and 21 new IMO questions. We open-source our code at https://github.com/InternLM/InternLM-Math and our data at https://huggingface.co/datasets/InternLM/Lean-Workbook.
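To make the formal-informal pair format concrete, here is an illustrative example of what such a pair looks like; it is hypothetical and not taken from the actual Lean Workbook dataset. The informal problem appears as a comment above a Lean 4 statement written in Mathlib style, with the proof left as `sorry` since statements and searched proofs are collected separately.

```lean
-- Informal: "Prove that for nonnegative reals a and b,
-- a + b ≥ 2√(ab)."
import Mathlib

theorem am_gm_two (a b : ℝ) (ha : 0 ≤ a) (hb : 0 ≤ b) :
    2 * Real.sqrt (a * b) ≤ a + b := by
  sorry
```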
Problem

Research questions and friction points this paper is trying to address.

Translate natural language math problems into Lean 4 statements
Generate synthetic training data for formal theorem proving
Improve LLM performance in understanding mathematical proofs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates synthetic Lean 4 data from math problems
Filters data iteratively for quality improvement
Creates formal-informal question pairs dataset