🤖 AI Summary
Large language models (LLMs) exhibit poor performance in multi-step first-order logic (FOL) theorem proving—e.g., Deepseek-Prover-V2-7B achieves only 4.2% accuracy on our newly constructed Lean 4 benchmark of 447 problems—primarily due to monolithic proof strategies and irreversible error propagation in early reasoning steps.
Method: We propose DREAM, a novel framework integrating axiom-driven adaptive strategy diversification with sub-propositional error identification and reflective regeneration, enabling fine-grained error correction and robust, cooperative reasoning. Built atop the Lean 4 formal system, DREAM unifies prompt engineering, dynamic strategy control, and hierarchical feedback.
Contribution/Results: On our benchmark, DREAM improves accuracy by 0.6–6.4 percentage points over strong baselines. It establishes the first dedicated, scalable evaluation benchmark for multi-step FOL theorem proving and introduces a generalizable methodological paradigm for formal reasoning with LLMs.
📝 Abstract
Large language models (LLMs) have shown promising first-order logic (FOL) reasoning capabilities with applications in various areas. However, their effectiveness in complex mathematical reasoning involving multi-step FOL deductions is still under-researched. While LLMs perform competitively on established mathematical reasoning benchmarks, they struggle with multi-step FOL tasks, as demonstrated by Deepseek-Prover-V2-7B's low accuracy (4.2%) on our proposed theorem proving dataset. This issue arises from the limited exploration of diverse proof strategies and the potential for early reasoning mistakes to undermine entire proofs. To address these issues, we propose DREAM, a self-adaptive solution that enhances the Diversity and REAsonability of LLMs' generation strategies. DREAM incorporates an Axiom-Driven Strategy Diversification mechanism to promote varied strategic outcomes and a Sub-Proposition Error Feedback mechanism to help LLMs reflect on and correct their proofs. Our contributions include pioneering advancements in LLMs' mathematical reasoning through FOL theorem proving, introducing a novel inference-stage solution that improves performance by 0.6% to 6.4%, and providing a curated dataset of 447 mathematical theorems in Lean 4 format for evaluation.
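To make the task concrete, here is a minimal sketch of what a multi-step FOL theorem stated in Lean 4 looks like. The theorem name, hypotheses, and tactic proof below are illustrative only, not drawn from the paper's dataset; they show the kind of chained deduction (here, two implications composed via `apply`) where an early wrong step would derail the rest of the proof.

```lean
-- Hypothetical example of a multi-step FOL deduction in Lean 4.
-- A prover must chain h₁ and h₂ in the right order; picking the
-- wrong rule at the first step leaves an unprovable goal.
theorem chain_implication (p q r : Prop)
    (h₁ : p → q) (h₂ : q → r) (hp : p) : r := by
  apply h₂   -- goal becomes q
  apply h₁   -- goal becomes p
  exact hp   -- closes the goal
```

Each tactic step rewrites the current goal, so an erroneous early `apply` propagates irreversibly, which is the failure mode the paper's sub-proposition error feedback targets.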