Large Language Models as an Indirect Reasoner: Contrapositive and Contradiction for Automated Reasoning

📅 2024-02-06
🏛️ International Conference on Computational Linguistics
📈 Citations: 2
Influential: 1
🤖 AI Summary
Large language models (LLMs) exhibit limited capability in indirect reasoning tasks—such as proof by contradiction and contrapositive inference—due to their reliance on direct, surface-level pattern matching. Method: This paper proposes the Direct-Indirect Reasoning (DIR) framework, the first systematic approach to explicitly incorporate indirect reasoning into LLMs. DIR introduces principled prompt templates grounded in contraposition and proof-by-contradiction logic, guiding LLMs to perform hypothesis negation, conflict derivation, and logical equivalence transformations. It further integrates direct and indirect reasoning via a multi-path fusion mechanism, enabling plug-and-play compatibility with Chain-of-Thought and its variants. Contribution/Results: Experiments across four logical reasoning and mathematical proof benchmarks demonstrate that DIR consistently enhances performance when combined with diverse baseline methods, validating that explicit modeling of indirect reasoning significantly improves LLMs’ rigor and generalizability in formal deduction.

📝 Abstract
Recently, increasing attention has been focused on improving the ability of Large Language Models (LLMs) to perform complex reasoning. Advanced methods, such as Chain-of-Thought (CoT) and its variants, are found to enhance their reasoning skills by designing suitable prompts or breaking down complex problems into more manageable sub-problems. However, little attention has been paid to exploring the reasoning process itself, i.e., we discovered that most methods resort to Direct Reasoning (DR) and disregard Indirect Reasoning (IR). This makes it difficult for LLMs to solve IR tasks, which are often encountered in the real world. To address this issue, we propose a Direct-Indirect Reasoning (DIR) method, which considers DR and IR as multiple parallel reasoning paths that are merged to derive the final answer. We stimulate LLMs to implement IR by crafting prompt templates incorporating the principles of contrapositive and contradiction. These templates trigger LLMs to assume the negation of the conclusion as true, combine it with the premises to deduce a conclusion, and utilize the logical equivalence of the contrapositive to enhance their comprehension of the rules used in the reasoning process. Our DIR method is simple yet effective and can be straightforwardly integrated with existing variants of CoT methods. Experimental results on four datasets related to logical reasoning and mathematical proof demonstrate that our DIR method, when combined with various baseline methods, significantly outperforms all the original methods.
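The abstract describes DIR as parallel reasoning paths (direct, proof by contradiction, contrapositive rewriting) whose answers are fused. A minimal sketch of that idea follows; the template wording, function names, and majority-vote fusion rule are illustrative assumptions, not the paper's exact prompts or fusion mechanism.

```python
from collections import Counter

# Hypothetical prompt templates sketching the three reasoning paths
# described in the abstract. Wording is assumed, not taken from the paper.

DIRECT_TEMPLATE = (
    "Premises: {premises}\n"
    "Question: Is the conclusion '{conclusion}' true?\n"
    "Reason step by step from the premises to the conclusion."
)

# Proof by contradiction: assume the negated conclusion and derive a conflict.
CONTRADICTION_TEMPLATE = (
    "Premises: {premises}\n"
    "Assume the opposite: suppose '{conclusion}' is FALSE.\n"
    "Combine this assumption with the premises. If a contradiction "
    "follows, the conclusion must be true."
)

# Contrapositive: rewrite each rule 'if P then Q' as the logically
# equivalent 'if not Q then not P' before reasoning.
CONTRAPOSITIVE_TEMPLATE = (
    "Premises: {premises}\n"
    "First rewrite every rule 'if P then Q' as its contrapositive "
    "'if not Q then not P', then decide whether '{conclusion}' holds."
)


def build_paths(premises: str, conclusion: str) -> list[str]:
    """Instantiate one prompt per reasoning path (one direct, two indirect)."""
    templates = (DIRECT_TEMPLATE, CONTRADICTION_TEMPLATE, CONTRAPOSITIVE_TEMPLATE)
    return [t.format(premises=premises, conclusion=conclusion) for t in templates]


def fuse_answers(answers: list[str]) -> str:
    """Merge the parallel paths' answers; majority vote is one plausible rule."""
    return Counter(answers).most_common(1)[0][0]
```

In use, each prompt from `build_paths` would be sent to an LLM independently, and `fuse_answers` would combine the extracted True/False verdicts into the final answer.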
Problem

Research questions and friction points this paper is trying to address.

Enhancing Large Language Models
Indirect Reasoning
Complex Problem Solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combined Reasoning
Contrapositive Method
Contradiction Approach
Authors
Yanfang Zhang, Nanjing University of Science and Technology
Yiliu Sun, Nanjing University of Science and Technology
Yibing Zhan, Unknown affiliation
Dapeng Tao, Yunnan University
Dacheng Tao, Nanyang Technological University
Chen Gong, Shanghai Jiao Tong University