From Hypothesis to Premises: LLM-based Backward Logical Reasoning with Selective Symbolic Translation

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) rely on forward, step-by-step reasoning and consequently suffer from path redundancy, hallucinated intermediate steps, and semantic drift, leading to low logical reasoning efficiency and poor reliability. To address this, we propose Hypothesis-Driven Backward Logical Reasoning (HBLR), a novel framework that initiates inference from the conclusion and proceeds backward toward the premises. HBLR first employs a confidence-aware symbolic translation mechanism to selectively convert natural language premises into first-order logic representations. It then performs backward deduction while incorporating a dual-reflection module—comprising translation reflection (ensuring semantic fidelity) and reasoning reflection (guaranteeing logical consistency)—to jointly model natural language and formal logic. Evaluated on five logical reasoning benchmarks, HBLR significantly outperforms strong baselines, achieving state-of-the-art accuracy with fewer inference steps. These results empirically validate the effectiveness of hypothesis-driven backward reasoning in enhancing both the robustness and efficiency of LLM-based logical inference.

📝 Abstract
Logical reasoning is a core challenge in natural language understanding and a fundamental capability of artificial intelligence, underpinning scientific discovery, mathematical theorem proving, and complex decision-making. Despite the remarkable progress of large language models (LLMs), most current approaches still rely on forward reasoning paradigms, generating step-by-step rationales from premises to conclusions. However, such methods often suffer from redundant inference paths, hallucinated steps, and semantic drift, resulting in inefficient and unreliable reasoning. In this paper, we propose a novel framework, Hypothesis-driven Backward Logical Reasoning (HBLR). The core idea is to integrate confidence-aware symbolic translation with hypothesis-driven backward reasoning. In the translation phase, only high-confidence spans are converted into logical form, such as First-Order Logic (FOL), while uncertain content remains in natural language. A translation reflection module further ensures semantic fidelity by evaluating symbolic outputs and reverting lossy ones back to text when necessary. In the reasoning phase, HBLR simulates human deductive thinking by assuming the conclusion is true and recursively verifying its premises. A reasoning reflection module further identifies and corrects flawed inference steps, enhancing logical coherence. Extensive experiments on five reasoning benchmarks demonstrate that HBLR consistently outperforms strong baselines in both accuracy and efficiency.
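The backward phase described above — assume the conclusion holds, then recursively verify the premises that would establish it — can be illustrated with a minimal backward-chaining sketch. This is not the authors' implementation; the rule base, fact set, and depth guard below are illustrative assumptions, with symbolic goals standing in for the FOL spans HBLR produces.

```python
# Hypothetical sketch of hypothesis-driven backward chaining (illustrative,
# not the paper's code). Rules map a conclusion to lists of premise sets
# (Horn-clause style); a goal is proved if it is a known fact or if every
# premise of some rule concluding it can itself be proved.
RULES = {
    "mortal(socrates)": [["human(socrates)"]],
    "human(socrates)": [["greek(socrates)"]],
}
FACTS = {"greek(socrates)"}

def prove(goal, depth=0, max_depth=10):
    """Return True if `goal` is derivable backward from FACTS via RULES."""
    if depth > max_depth:   # guard against circular rule chains
        return False
    if goal in FACTS:       # base case: goal is an established premise
        return True
    # try every rule whose conclusion matches the current hypothesis
    for premises in RULES.get(goal, []):
        if all(prove(p, depth + 1, max_depth) for p in premises):
            return True
    return False

print(prove("mortal(socrates)"))  # → True
print(prove("god(socrates)"))     # → False
```

In HBLR the premise-verification step would additionally pass through the reasoning reflection module, which flags and corrects flawed inference steps before they are accepted.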
Problem

Research questions and friction points this paper is trying to address.

Improves logical reasoning accuracy by backward verification
Reduces redundant steps and hallucinations in inference
Enhances efficiency with selective symbolic translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Backward reasoning from hypothesis to premises
Selective symbolic translation of high-confidence spans
Reflection modules for semantic fidelity and logical coherence
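The selective-translation idea from the list above — convert a span to first-order logic only when the translator is confident, otherwise keep it as text — can be sketched as a simple confidence gate. The threshold value, the triple format, and the example spans are assumptions for illustration, not details from the paper.

```python
# Hypothetical illustration of confidence-aware selective translation
# (not the authors' code). Each span carries a candidate FOL form and an
# assumed translator confidence; only high-confidence spans are converted,
# and uncertain content stays in natural language.
THRESHOLD = 0.8  # assumed cutoff for accepting a symbolic translation

def translate_selectively(spans):
    """spans: list of (text, fol_candidate, confidence) triples."""
    return [fol if conf >= THRESHOLD else text
            for text, fol, conf in spans]

spans = [
    ("All humans are mortal.", "∀x. Human(x) → Mortal(x)", 0.95),
    ("Socrates is arguably wise.", "Wise(socrates)", 0.55),
]
print(translate_selectively(spans))
# → ['∀x. Human(x) → Mortal(x)', 'Socrates is arguably wise.']
```

The paper's translation reflection module goes further: even an accepted symbolic form is checked for semantic fidelity and reverted to text if the translation proves lossy.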
👥 Authors
Qingchuan Li — University of Science and Technology of China
Mingyue Cheng — University of Science and Technology of China
Zirui Liu — Peking University
Daoyu Wang — University of Science and Technology of China
Yuting Zeng — University of Science and Technology of China
Tongxuan Liu — University of Science and Technology of China