Instantiation-based Formalization of Logical Reasoning Tasks using Language Models and Logical Solvers

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited robustness of large language models (LLMs) in logical reasoning and their imprecise translation from natural language to formal logic, this paper proposes Semantic Self-Verification (SSV), a novel framework that grounds abstract formalizations in instance consistency and introduces the paradigm of *near-certain reasoning* for high-confidence logical deduction. The method combines LLM-based generation, verification by a formal logic solver, instance-consistency constraints, and generalization to abstract formalizations. Evaluated on open logical reasoning benchmarks, SSV substantially outperforms existing state-of-the-art approaches: its verification achieves near-perfect precision over a significant share of cases, and reliance on manual verification drops by over 70%. Key contributions: (1) the first self-verifying framework to embed instance consistency directly into the formalization process; (2) near-certain reasoning as a new principled paradigm for reliable logical inference; and (3) significant improvements in the reliability and autonomy of AI reasoning systems.

📝 Abstract
Robustness of reasoning remains a significant challenge for large language models, and addressing it is essential for the practical applicability of AI-driven reasoning systems. We introduce Semantic Self-Verification (SSV), a novel approach that addresses the key challenge in combining language models with the rigor of logical solvers: to accurately formulate the reasoning problem from natural language to the formal language of the solver. SSV uses a consistency-based approach to produce strong abstract formalizations of problems using concrete instantiations that are generated by the model and verified by the solver. In addition to significantly advancing the overall reasoning accuracy over the state-of-the-art, a key novelty that this approach presents is a feature of verification that has near-perfect precision over a significant coverage of cases, as we demonstrate on open reasoning benchmarks. We propose such *near-certain reasoning* as a new approach to reduce the need for manual verification in many cases, taking us closer to more dependable and autonomous AI reasoning systems.
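
A minimal sketch of the instance-consistency idea described in the abstract, in Python: an LLM proposes an abstract formalization together with concrete instantiations, and a logic solver (Z3 here, used only as a stand-in) accepts the formalization only if it is consistent with every instance. The `llm_formalize` and `llm_instantiate` helpers are hypothetical placeholders, not the authors' interface, and the loop is an illustration of the verification pattern rather than the paper's implementation.

```python
# Sketch of SSV-style instance-consistency checking (assumptions: hypothetical
# LLM helpers, SMT-LIB as the formal language, Z3 as the solver backend).
from z3 import Solver, parse_smt2_string, sat


def llm_formalize(problem_text: str) -> str:
    """Hypothetical: ask an LLM to translate the problem into SMT-LIB."""
    raise NotImplementedError


def llm_instantiate(problem_text: str) -> list[str]:
    """Hypothetical: ask an LLM for concrete instances of the problem,
    each encoded as SMT-LIB assertions that a correct abstract
    formalization should be consistent with."""
    raise NotImplementedError


def instances_consistent(formalization: str, instances: list[str]) -> bool:
    """Accept the abstract formalization only if every concrete instance
    is jointly satisfiable with it under the solver."""
    for instance in instances:
        solver = Solver()
        solver.add(parse_smt2_string(formalization + "\n" + instance))
        if solver.check() != sat:  # instance contradicts the abstraction
            return False
    return True


def ssv_formalize(problem_text: str, max_attempts: int = 3) -> str | None:
    """Retry formalization until a candidate passes the consistency check."""
    instances = llm_instantiate(problem_text)
    for _ in range(max_attempts):
        candidate = llm_formalize(problem_text)
        if instances_consistent(candidate, instances):
            return candidate  # verified against instances: near-certain case
    return None  # no verified formalization: fall back to manual review
```

In this sketch, a returned formalization has survived solver-checked consistency with every model-generated instance, which is the sense in which cases covered by the verifier can be treated as near-certain; cases where no candidate passes are deferred rather than trusted.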
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Robustness and Accuracy
AI Reasoning Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Self-Verification
Formal Language Translation
Enhanced AI Reasoning