Safe: Enhancing Mathematical Reasoning in Large Language Models via Retrospective Step-aware Formal Verification

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate hallucinated reasoning chains in mathematical problem-solving that are difficult to detect. Method: This paper introduces a retrospective, step-aware formal verification framework that translates natural-language chain-of-thought (CoT) steps into Lean 4, a theorem prover's formal language, to produce machine-checkable proofs. Contribution/Results: It is the first work to systematically apply Lean 4 to automated, end-to-end verification of LLM-generated reasoning. The framework establishes an interpretable, step-level verification paradigm and introduces FormalStep, the first fine-grained benchmark for step correctness (30,809 formal propositions). Integrated with CoT enhancement and multi-model ensemble evaluation, it achieves significant accuracy improvements across multiple mathematical reasoning benchmarks, delivering transparent, reproducible, and formally verified evidence for each inference step.
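To make the idea concrete, here is a toy sketch (not from the paper) of what translating a single CoT step into Lean 4 might look like: the natural-language step "since x > 2, we have x² > 4" is rendered as a standalone proposition whose proof the Lean checker either accepts or rejects. The theorem name and tactic choice are illustrative assumptions; the paper's actual autoformalization pipeline is more involved.

```lean
import Mathlib

-- Hypothetical example: one CoT step, "since x > 2, it follows that x^2 > 4",
-- stated as a machine-checkable Lean 4 proposition.
theorem step_check (x : ℝ) (h : x > 2) : x ^ 2 > 4 := by
  -- `nlinarith` discharges this nonlinear arithmetic goal;
  -- if the step were hallucinated, no proof would be found.
  nlinarith
```

A step for which Lean finds a proof is verified; a step whose formalization cannot be proved (or whose negation can be) is flagged as a potential hallucination.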

📝 Abstract
Chain-of-Thought (CoT) prompting has become the de facto method to elicit reasoning capabilities from large language models (LLMs). However, to mitigate hallucinations in CoT that are notoriously difficult to detect, current methods such as process reward models (PRMs) or self-consistency operate as opaque boxes and do not provide checkable evidence for their judgments, possibly limiting their effectiveness. To address this issue, we draw inspiration from the idea that "the gold standard for supporting a mathematical claim is to provide a proof". We propose a retrospective, step-aware formal verification framework $Safe$. Rather than assigning arbitrary scores, we strive to articulate mathematical claims in the formal mathematical language Lean 4 at each reasoning step and provide formal proofs to identify hallucinations. We evaluate our framework $Safe$ across multiple language models and various mathematical datasets, demonstrating a significant performance improvement while offering interpretable and verifiable evidence. We also propose $FormalStep$ as a benchmark for step correctness theorem proving with $30,809$ formal statements. To the best of our knowledge, our work represents the first endeavor to utilize the formal mathematical language Lean 4 for verifying natural language content generated by LLMs, aligning with the reason why formal mathematical languages were created in the first place: to provide a robust foundation for hallucination-prone human-written proofs.
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinations in Chain-of-Thought reasoning in LLMs
Provide verifiable evidence for mathematical claims in LLMs
Improve interpretability and correctness of mathematical reasoning in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrospective step-aware formal verification framework
Uses Lean 4 for formal mathematical proofs
Introduces FormalStep benchmark for theorem proving
Chengwu Liu
School of Computer Science, National Key Laboratory for Multimedia Information Processing, PKU-Anker LLM Lab, Peking University
Yichun Yin
Noah's Ark Lab, Huawei
Yan Xu
Huawei Noah’s Ark Lab
Xin Xu
The Hong Kong University of Science and Technology
Zaoyu Chen
The Hong Kong Polytechnic University
Yasheng Wang
Tencent
Lifeng Shang
Huawei Noah's Ark Lab
Qun Liu
Huawei Noah’s Ark Lab
Ming Zhang
School of Computer Science, National Key Laboratory for Multimedia Information Processing, PKU-Anker LLM Lab, Peking University