Hybrid Models for Natural Language Reasoning: The Case of Syllogistic Logic

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit a critical deficiency in logical generalization, as revealed by systematic evaluation on syllogistic reasoning. The paper distinguishes two abilities often conflated in the literature: compositionality (abstracting and combining atomic rules) and recursiveness (the iterative application of rules). While LLMs demonstrate moderate recursive capability, their compositional generalization remains severely limited. Method: a neuro-symbolic hybrid architecture integrates a pretrained LLM as a natural-language interface with a formal symbolic reasoning engine; a lightweight neural module dynamically selects and instantiates symbolic rules, explicitly decoupling and coordinating compositionality and recursiveness. Contribution/Results: the hybrid model achieves significant improvements in logical completeness and out-of-distribution robustness while maintaining high inference efficiency. Crucially, it attains reliable, interpretable logical deduction using only a small neural component, demonstrating that principled integration of neural and symbolic paradigms can bridge fundamental gaps in LLM reasoning.

📝 Abstract
Despite the remarkable progress in neural models, their ability to generalize, a cornerstone for applications like logical reasoning, remains a critical challenge. We delineate two fundamental aspects of this ability: compositionality, the capacity to abstract atomic logical rules underlying complex inferences, and recursiveness, the aptitude to build intricate representations through iterative application of inference rules. In the literature, these two aspects are often confounded under the umbrella term of generalization. To sharpen this distinction, we investigated the logical generalization capabilities of pre-trained large language models (LLMs) using the syllogistic fragment as a benchmark for natural language reasoning. Though simple, this fragment provides a foundational yet expressive subset of formal logic that supports controlled evaluation of essential reasoning abilities. Our findings reveal a significant disparity: while LLMs demonstrate reasonable proficiency in recursiveness, they struggle with compositionality. To overcome these limitations and establish a reliable logical prover, we propose a hybrid architecture integrating symbolic reasoning with neural computation. This synergistic interaction enables robust and efficient inference: neural components accelerate processing, while symbolic reasoning ensures completeness. Our experiments show that high efficiency is preserved even with relatively small neural components. This analysis provides a rationale for our proposed methodology and highlights the potential of hybrid models to effectively address key generalization barriers in neural reasoning systems.
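To make the abstract's distinction concrete, here is a minimal sketch (not code from the paper) of recursiveness in the syllogistic fragment: a single atomic rule, Barbara ("All X are Y" and "All Y are Z" entail "All X are Z"), applied iteratively to a fixed point. Compositionality, by contrast, would require abstracting this rule pattern and combining it with other syllogistic rules.

```python
def barbara_closure(facts):
    """Forward-chain the Barbara rule to a fixed point.
    Each fact is a pair (x, y) read as 'All x are y'."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, y1) in list(derived):
            for (y2, z) in list(derived):
                # Barbara: All x are y, All y are z |- All x are z
                if y1 == y2 and (x, z) not in derived:
                    derived.add((x, z))
                    changed = True
    return derived

facts = {("greeks", "men"), ("men", "mortals"), ("mortals", "things")}
print(sorted(barbara_closure(facts) - facts))
# → [('greeks', 'mortals'), ('greeks', 'things'), ('men', 'things')]
```

The loop is the "recursiveness" the abstract credits LLMs with: chaining one known rule over longer and longer derivations. The harder, compositional step is recognizing that a novel inference instantiates such an abstract rule in the first place.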
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with compositional generalization in logical reasoning
Whether hybrid symbolic-neural integration can yield robust, complete inference
The syllogistic logic benchmark reveals neural models' generalization limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid architecture integrates symbolic and neural reasoning
Neural components accelerate processing for efficiency
Symbolic reasoning ensures completeness of logical inference
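The division of labor in these bullets can be sketched as follows. This is a hedged illustration, not the paper's implementation: `neural_score` is a hypothetical stand-in for the lightweight neural selector, and the rule table covers only two syllogistic moods. The neural part only orders the search frontier; the symbolic engine applies rules soundly, and since every applicable instantiation is eventually tried, completeness of the search is preserved.

```python
RULES = {
    # Barbara: All X are Y, All Y are Z |- All X are Z
    "barbara": lambda p, q: ("all", p[1], q[2])
        if p[0] == q[0] == "all" and p[2] == q[1] else None,
    # Celarent: No Y are Z, All X are Y |- No X are Z
    "celarent": lambda p, q: ("no", q[1], p[2])
        if p[0] == "no" and q[0] == "all" and q[2] == p[1] else None,
}

def neural_score(rule_name, p, q):
    """Stand-in for the neural rule selector: any learned ranking
    could go here. This placeholder merely prefers Barbara."""
    return 1.0 if rule_name == "barbara" else 0.5

def hybrid_prove(premises, goal, max_steps=100):
    """Neural ranking guides which symbolic rule to apply next;
    the symbolic engine alone decides what counts as a valid step."""
    facts = set(premises)
    for _ in range(max_steps):
        candidates = [
            (neural_score(name, p, q), rule(p, q))
            for name, rule in RULES.items()
            for p in facts for q in facts
            if rule(p, q) and rule(p, q) not in facts
        ]
        if not candidates:
            return goal in facts      # frontier exhausted: search complete
        candidates.sort(reverse=True) # neural scores order the frontier
        facts.add(candidates[0][1])   # symbolic engine adds the conclusion
        if goal in facts:
            return True
    return goal in facts

premises = {("all", "greeks", "men"), ("all", "men", "mortals")}
print(hybrid_prove(premises, ("all", "greeks", "mortals")))  # → True
```

Because the selector only reorders candidate rule applications, a poorly trained scorer can slow the search but never make it unsound or incomplete, which matches the paper's claim that efficiency is preserved even with small neural components.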