Beyond Correctness: Exposing LLM-generated Logical Flaws in Reasoning via Multi-step Automated Theorem Proving

📅 2025-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Large language models (LLMs) frequently exhibit subtle logical fallacies in multi-step reasoning for high-stakes domains (e.g., healthcare, law), where surface-level linguistic fluency masks underlying inconsistencies; existing fact-checking, self-consistency, and rule-based validation methods fail to detect complex chained errors. Method: We propose Multi-step Automated Theorem Proving (MATP), the first framework to systematically translate LLM reasoning traces into first-order logic (FOL) via semantic parsing and stepwise formal modeling, then verify logical validity with integrated automated theorem provers (e.g., Vampire and the E prover). Contribution/Results: MATP goes beyond coarse-grained consistency checks by enabling fine-grained error localization and classification. Evaluated on 10,830 cross-task samples, it achieves a 42.3-percentage-point improvement in step-level verification accuracy over prompt-engineering baselines, empirically validating reasoning-specific formal evaluation.
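The core check in this pipeline is an entailment query: encode the premises and one reasoning step as a theorem-proving problem and ask a prover whether the step follows. Below is a minimal sketch, assuming formulas have already been translated into TPTP FOF strings and a local `eprover` binary is on the PATH; the function name `entails` and the encoding details are illustrative, not the paper's implementation.

```python
import subprocess
import tempfile

def entails(premises: list[str], conclusion: str, timeout: int = 10) -> bool:
    """Ask the E prover whether `premises` logically entail `conclusion`.

    All formulas are TPTP FOF strings, e.g. "![X]: (cat(X) => animal(X))".
    """
    # Build a TPTP problem: each premise is an axiom, the step is the conjecture.
    lines = [f"fof(premise_{i}, axiom, {p})." for i, p in enumerate(premises)]
    lines.append(f"fof(step, conjecture, {conclusion}).")
    with tempfile.NamedTemporaryFile("w", suffix=".p", delete=False) as f:
        f.write("\n".join(lines))
        problem = f.name
    result = subprocess.run(
        ["eprover", "--auto", f"--cpu-limit={timeout}", problem],
        capture_output=True, text=True,
    )
    # E reports "SZS status Theorem" when the conjecture is proved.
    return "SZS status Theorem" in result.stdout
```

Swapping in Vampire would change only the command line; both provers consume TPTP input and emit SZS status lines.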

📝 Abstract
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, leading to their adoption in high-stakes domains such as healthcare, law, and scientific research. However, their reasoning often contains subtle logical errors masked by fluent language, posing significant risks for critical applications. While existing approaches like fact-checking, self-consistency methods, and rule-based validation provide partial solutions, they fail to detect complex logical flaws in multi-step reasoning. To overcome these challenges, we present MATP, an evaluation framework for systematically verifying LLM reasoning via Multi-step Automated Theorem Proving. MATP translates natural language reasoning into First-Order Logic (FOL) and applies automated theorem provers to assess step-by-step logical validity. This approach identifies hidden logical errors and provides fine-grained classifications of reasoning correctness. Evaluations on a benchmark comprising 10,830 reasoning instances generated by 10 LLMs across tasks from PrOntoQA-OOD, ProofWriter, and FOLIO show that MATP surpasses prompting-based baselines by over 42 percentage points in reasoning step verification. It further reveals model-level disparities, with reasoning models generating more logically coherent outputs than general models. These results demonstrate MATP's potential to enhance the trustworthiness of LLM-generated reasoning.
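The "fine-grained classifications of reasoning correctness" the abstract mentions can be sketched as a mapping from the prover's per-step SZS status to a verdict, carrying each step forward as context for later ones. The three-way label set below is a hypothetical simplification for illustration, not the paper's actual taxonomy:

```python
import subprocess
import tempfile

def szs_status(premises: list[str], conjecture: str, timeout: int = 10) -> str:
    """Run E on a TPTP problem and return the reported SZS status word."""
    lines = [f"fof(p{i}, axiom, {p})." for i, p in enumerate(premises)]
    lines.append(f"fof(step, conjecture, {conjecture}).")
    with tempfile.NamedTemporaryFile("w", suffix=".p", delete=False) as f:
        f.write("\n".join(lines))
        problem = f.name
    out = subprocess.run(
        ["eprover", "--auto", f"--cpu-limit={timeout}", problem],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if "SZS status" in line:
            return line.rsplit("SZS status", 1)[1].split()[0]
    return "Unknown"

VERDICT = {  # hypothetical mapping, not the paper's taxonomy
    "Theorem": "valid",               # step is entailed by its context
    "CounterSatisfiable": "invalid",  # not entailed: a countermodel exists
}

def classify_trace(premises: list[str], steps: list[str]) -> list[str]:
    """Label every step; timeouts and give-ups fall through to 'unknown'."""
    context, verdicts = list(premises), []
    for step in steps:
        verdicts.append(VERDICT.get(szs_status(context, step), "unknown"))
        context.append(step)  # assumption: later steps may cite earlier ones
    return verdicts
```

Note that "CounterSatisfiable" means the step is not entailed (the axioms admit a countermodel), which is weaker than being outright contradicted; a finer taxonomy would distinguish the two.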
Problem

Research questions and friction points this paper is trying to address.

Detecting subtle logical errors hidden in the fluent multi-step reasoning of LLMs
Translating natural-language reasoning into formal logic that can be mechanically verified
Quantifying model-level disparities in logical coherence across LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Translates reasoning into First-Order Logic for verification
Uses automated theorem provers to assess stepwise logical validity
Identifies hidden errors and classifies reasoning correctness granularly
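As a toy illustration of the step-level error localization described above, here is the hypothetical `classify_trace` sketch from earlier (assumed to be in scope) applied to a three-step trace whose last step commits a converse fallacy:

```python
# Assumes classify_trace from the sketch above; trace is invented for illustration.
premises = [
    "![X]: (cat(X) => mammal(X))",     # all cats are mammals
    "![X]: (mammal(X) => animal(X))",  # all mammals are animals
    "cat(tom)",                        # Tom is a cat
]
steps = [
    "mammal(tom)",                     # sound: from premises 1 and 3
    "animal(tom)",                     # sound: from premise 2 and step 1
    "![X]: (animal(X) => cat(X))",     # converse fallacy: not entailed
]
print(classify_trace(premises, steps))
# expected under this sketch: ['valid', 'valid', 'invalid']
```

The prover pinpoints the third step as the flawed one, which is exactly the kind of localization that coarse answer-level checks miss.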
👥 Authors
Xinyi Zheng (PhD in Computer Science, University of Bristol)
Ningke Li (National University of Singapore)
Xiaokun Luan (Peking University, Beijing, China)
Kailong Wang (Huazhong University of Science and Technology, Wuhan, China)
Ling Shi (Nanyang Technological University, Singapore)
Meng Sun (Professor, School of Mathematical Science, Peking University; interests: software theory, formal methods, cyber-physical systems, coalgebra theory, trustworthy AI)
Haoyu Wang (Huazhong University of Science and Technology, Wuhan, China)