SR-FoT: A Syllogistic-Reasoning Framework of Thought for Large Language Models Tackling Knowledge-based Reasoning Tasks

📅 2025-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often deviate from logical reasoning paths and exhibit insufficient accuracy in knowledge-driven deductive reasoning, especially under chain-of-thought prompting. To address this, we propose SR-FoT—a multi-stage syllogistic reasoning framework grounded in formal logic. SR-FoT explicitly models a closed-loop three-phase process: major premise generation, minor premise matching, and conclusion derivation. It integrates semantic parsing, stepwise prompt engineering, and premise-guided generation to construct structured, interpretable reasoning chains—without fine-tuning or external knowledge bases. Evaluated on multiple knowledge-intensive reasoning benchmarks, SR-FoT significantly outperforms state-of-the-art methods including Chain-of-Thought and Self-Consistency. It achieves comprehensive improvements in reasoning accuracy, process interpretability, and robustness to input perturbations. To our knowledge, SR-FoT is the first framework to systematically embed the human syllogistic reasoning paradigm into LLM inference workflows.

📝 Abstract
Deductive reasoning is a crucial logical capability that helps us solve complex problems based on existing knowledge. Even when augmented with Chain-of-Thought prompts, Large Language Models (LLMs) may not follow the correct reasoning paths. Enhancing the deductive reasoning abilities of LLMs, and leveraging their extensive built-in knowledge for various reasoning tasks, remains an open question. Attempting to mimic the human deductive reasoning paradigm, we propose a multi-stage Syllogistic-Reasoning Framework of Thought (SR-FoT) that enables LLMs to perform syllogistic deductive reasoning on complex knowledge-based reasoning tasks. SR-FoT begins by interpreting the question, then uses the interpretation together with the original question to propose a suitable major premise. It proceeds by generating and answering minor-premise questions in two stages to match the minor premises. Finally, it guides the LLM to use the previously generated major and minor premises to perform syllogistic deductive reasoning and derive the answer to the original question. Extensive experiments on knowledge-based reasoning tasks demonstrate the effectiveness and advantages of SR-FoT.
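The staged workflow the abstract describes (question interpretation → major-premise proposal → minor-premise Q&A → syllogistic conclusion) can be sketched as a prompting pipeline. This is an illustrative reconstruction, not the paper's code: `ask_llm` is a hypothetical stand-in for a real LLM API call, stubbed here with canned responses so the control flow runs end to end.

```python
# Illustrative sketch of an SR-FoT-style staged prompting pipeline.
# `ask_llm` is a hypothetical placeholder for a real LLM call; it is
# stubbed with canned responses keyed by a stage tag for this demo.

CANNED = {
    "interpret": "The question asks which metal conducts electricity best.",
    "major": "All metals with the highest electron mobility conduct electricity best.",
    "minor_q": "Which metal has the highest electron mobility?",
    "minor_a": "Silver has the highest electron mobility among metals.",
    "conclude": "Therefore, silver conducts electricity best.",
}

def ask_llm(prompt: str) -> str:
    """Stand-in for an LLM call; matches the stage tag at the prompt's start."""
    for key, text in CANNED.items():
        if prompt.startswith(f"[{key}]"):
            return text
    return ""

def sr_fot(question: str) -> dict:
    """Run the SR-FoT stages in order and return the reasoning chain."""
    # Stage 1: interpret the question.
    interpretation = ask_llm(f"[interpret] Explain what is being asked: {question}")
    # Stage 2: propose a major premise from the interpretation + question.
    major = ask_llm(f"[major] Given '{interpretation}', state a general rule.")
    # Stages 3-4: generate and answer the minor-premise question.
    minor_q = ask_llm(f"[minor_q] Pose the question that matches: {major}")
    minor = ask_llm(f"[minor_a] Answer: {minor_q}")
    # Stage 5: syllogistic deduction from the two premises.
    conclusion = ask_llm(f"[conclude] From major '{major}' and minor '{minor}', derive the answer.")
    return {"major": major, "minor": minor, "conclusion": conclusion}

chain = sr_fot("Which metal conducts electricity best?")
print(chain["conclusion"])  # → Therefore, silver conducts electricity best.
```

In a real system each `ask_llm` call would carry the full stepwise prompt template; the key design point is that each stage conditions only on the outputs of earlier stages, keeping the reasoning chain explicit and inspectable.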
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Knowledge Reasoning
Deductive Inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

SR-FoT
Knowledge Reasoning
Large Language Model (LLM)
Wentao Wan
Sun Yat-sen University
Artificial Intelligence, Cognitive AI, Deep Learning, Neural-Symbolic, Question Answering
Zhuojie Yang
School of Computer Science and Engineering, Sun Yat-sen University
Yongcan Chen
South China Normal University
Chenglin Luo
School of Computer Science and Engineering, Sun Yat-sen University
Ruilin Wang
School of Computer Science and Engineering, Sun Yat-sen University
Kehao Cai
School of Computer Science and Engineering, Sun Yat-sen University
Nan Kang
School of Computer Science and Engineering, Sun Yat-sen University
Liang Lin
Fellow of IEEE/IAPR, Professor of Computer Science, Sun Yat-sen University
Embodied AI, Causal Inference and Learning, Multimodal Data Analysis
Keze Wang
School of Computer Science and Engineering, Sun Yat-sen University; Guangdong Key Laboratory of Big Data Analysis and Processing