Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently exhibit inconsistencies between their reasoning chains and final conclusions, as well as incomplete extraction of logical information in complex logical reasoning tasks. To address these issues, we propose Logic-of-Thought (LoT), a prompting framework that uses propositional logic to generate expanded logical descriptions and injects them into the input context, so that extracted logical information is not lost during reasoning. LoT is orthogonal to existing paradigms and integrates seamlessly with Chain-of-Thought (CoT), Self-Consistency, and Tree-of-Thoughts (ToT). Its methodology comprises three phases: extracting logical expressions from the input, expanding them with classical inference laws, and translating the expanded expressions back into natural-language context. Evaluated on five logical reasoning benchmarks, including ReClor, RuleTaker, and ProofWriter, LoT achieves consistent gains: +4.35% accuracy over CoT on ReClor and +8.0% over ToT on ProofWriter.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, but their performance in complex logical reasoning tasks remains unsatisfactory. Although some prompting methods, such as Chain-of-Thought, can improve the reasoning ability of LLMs to some extent, they suffer from an unfaithfulness issue where derived conclusions may not align with the generated reasoning chain. To address this issue, some studies employ propositional logic to further enhance the logical reasoning abilities of LLMs. However, potential omissions in the extraction of logical expressions in these methods can cause information loss in the logical reasoning process, thereby generating incorrect results. To this end, we propose Logic-of-Thought (LoT) prompting, which employs propositional logic to generate expanded logical information descriptions and utilizes them as an additional augmentation to the original context, thereby ensuring information completeness and enhancing logical reasoning ability. LoT is orthogonal to existing prompting methods and can be seamlessly integrated with them. Extensive experiments demonstrate that LoT boosts the performance of various prompting methods by a striking margin across five logical reasoning tasks. In particular, LoT enhances Chain-of-Thought's performance on the ReClor dataset by +4.35%, improves Chain-of-Thought with Self-Consistency's performance on the RuleTaker dataset by +3.52%, and boosts the performance of Tree-of-Thoughts on the ProofWriter dataset by +8%.
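The expansion step described above applies classical propositional laws to implications extracted from the input before translating them back into context. A minimal sketch of that idea, assuming implications are represented as (premise, conclusion) string pairs and using transposition and hypothetical syllogism as the inference laws (in the paper, extraction and translation are performed by the LLM itself; the representation and function names here are invented for illustration):

```python
# Sketch of LoT-style logical expansion (illustrative, not the paper's code).
from itertools import product

def contrapositive(rule):
    """Transposition law: (p -> q) entails (not q -> not p)."""
    p, q = rule
    # The replace() collapses double negation ("not not x" -> "x").
    return (("not " + q).replace("not not ", ""),
            ("not " + p).replace("not not ", ""))

def transitive_closure(rules):
    """Hypothetical syllogism: (p -> q) and (q -> r) entail (p -> r)."""
    rules = set(rules)
    changed = True
    while changed:
        changed = False
        for (p, q), (q2, r) in product(list(rules), repeat=2):
            if q == q2 and p != r and (p, r) not in rules:
                rules.add((p, r))
                changed = True
    return rules

def expand(rules):
    """Expand extracted implications with transposition + syllogism."""
    expanded = set(rules) | {contrapositive(r) for r in rules}
    return transitive_closure(expanded)

# Toy example: implications extracted from a passage.
rules = {("it rains", "the ground is wet"),
         ("the ground is wet", "shoes get muddy")}
for p, q in sorted(expand(rules)):
    print(f"If {p}, then {q}.")
```

Each expanded pair would then be verbalized ("If it rains, then shoes get muddy.") and appended to the original prompt, which is what distinguishes LoT from neuro-symbolic methods that hand the whole problem to an external solver.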
Problem

Research questions and friction points this paper is trying to address.

Enhance logical reasoning in LLMs
Address unfaithful reasoning chains
Ensure completeness in logical information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates propositional logic for reasoning
Augments context with logical information
Enhances the performance of existing prompting methods
👥 Authors
Tongxuan Liu (University of Science and Technology of China)
Wenjiang Xu (Institute of Automation, Chinese Academy of Sciences)
Weizhe Huang (University of Science and Technology of China)
Xingyu Wang (Nanjing University of Posts and Telecommunications)
Jiaxing Wang (JD.com)
Hailong Yang (Beihang University)
Jing Li (University of Science and Technology of China)