🤖 AI Summary
Formal theorem proving suffers from a scarcity of university-level mathematical training data. Method: This paper proposes a hybrid conjecture-generation framework that integrates rule-based context extraction from the Mathlib library with large language model (LLM) generation, building on the Lean 4 formal system, the aesop automated-reasoning tactic, and GRPO (Group Relative Policy Optimization, a reinforcement learning algorithm) for proof-guidance optimization. The framework runs an iterative generate–evaluate–filter loop. Contribution/Results: From 40 seed files, it produces 12,289 conjectures, of which 3,776 are syntactically valid and non-trivial (averaging 103.25 high-quality conjectures per seed file). Empirical evaluation verifies several non-trivial topological theorems, including properties of semi-open, alpha-open, and pre-open sets, demonstrating substantial gains in scalability and practicality for formal mathematical knowledge discovery.
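The generate–evaluate–filter loop described above can be sketched roughly as follows. This is a hypothetical illustration only: every function name and the toy filtering logic are stand-ins for the paper's actual components (LLM generation, Lean 4 elaboration, and the aesop triviality check), not its real implementation or API.

```python
# Hypothetical sketch of the generate-evaluate-filter loop.
# All names and the toy logic below are illustrative stand-ins,
# not the paper's actual implementation.

def generate_conjectures(context):
    # Stand-in for the LLM generation step: propose candidate statements.
    return [f"{context}_conj_{i}" for i in range(3)]

def is_syntactically_valid(conjecture):
    # Stand-in for Lean 4 elaboration; here every candidate "parses".
    return True

def is_trivial(conjecture):
    # Stand-in for the aesop filter; here the first candidate is "trivial".
    return conjecture.endswith("_0")

def filter_loop(seed_contexts, rounds=2):
    kept = []
    for _ in range(rounds):
        for ctx in seed_contexts:
            for c in generate_conjectures(ctx):
                if (is_syntactically_valid(c)
                        and not is_trivial(c)
                        and c not in kept):  # keep only novel conjectures
                    kept.append(c)
    return kept

print(filter_loop(["open_sets"]))  # ['open_sets_conj_1', 'open_sets_conj_2']
```

The essential point is that only candidates surviving both the syntactic check and the non-triviality check are retained, and repeated rounds add nothing unless they produce genuinely novel statements.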
📝 Abstract
We introduce LeanConjecturer, a pipeline for automatically generating university-level mathematical conjectures in Lean 4 using Large Language Models (LLMs). Our hybrid approach combines rule-based context extraction with LLM-based theorem-statement generation, addressing the data-scarcity challenge in formal theorem proving. Through iterative generation and evaluation, LeanConjecturer produced 12,289 conjectures from 40 Mathlib seed files, of which 3,776 were identified as syntactically valid and non-trivial, that is, not provable by the aesop tactic. We demonstrate the utility of these generated conjectures for reinforcement learning through Group Relative Policy Optimization (GRPO), showing that targeted training on domain-specific conjectures can enhance theorem-proving capabilities. Our approach generates 103.25 novel conjectures per seed file on average, providing a scalable solution for creating training data for theorem-proving systems. Our system successfully verified several non-trivial theorems in topology, including properties of semi-open, alpha-open, and pre-open sets, demonstrating its potential for mathematical discovery beyond simple variations of existing results.
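For illustration, a conjecture of the kind the pipeline targets might look like the following Lean 4 sketch. The definition name `IsSemiOpen` and the stated theorem are hypothetical examples written against Mathlib's standard `interior`/`closure` API; they are not taken from the paper's actual output.

```lean
import Mathlib.Topology.Basic

variable {X : Type*} [TopologicalSpace X]

/-- A set is semi-open if it is contained in the closure of its interior.
    (Illustrative definition; name chosen here, not from the paper.) -/
def IsSemiOpen (A : Set X) : Prop := A ⊆ closure (interior A)

/-- Example of a simple conjecture in this style: every open set is semi-open. -/
theorem isSemiOpen_of_isOpen {A : Set X} (h : IsOpen A) : IsSemiOpen A := by
  rw [IsSemiOpen, h.interior_eq]
  exact subset_closure
```

Conjectures this easy would be filtered out as trivial by the aesop check; the 3,776 retained statements are those the tactic could not dispatch automatically.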