Empowering LLMs with Logical Reasoning: A Comprehensive Survey

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two core challenges in large language models' (LLMs) logical reasoning: low accuracy on complex logical tasks and inconsistency across multi-turn question answering. Methodologically, it proposes a unified framework spanning external solver integration, prompt engineering, pretraining, and fine-tuning. It introduces the first multidimensional taxonomy of logical consistency—covering entailment, negation, transitivity, factuality, and their combinations—and extends it to modal logic and multi-constraint satisfaction. Key technical components include symbolic solver integration, tree-of-thought prompting, logic-aware pretraining objectives, consistency-regularized fine-tuning, and a multi-granularity evaluation protocol. Contributions include a systematic unification of major logical reasoning benchmarks (e.g., ProofWriter, LogiQA) and evaluation metrics; the surveyed methods report significant improvements in both accuracy on complex logical problems and cross-turn response consistency.

📝 Abstract
Large language models (LLMs) have achieved remarkable successes on various natural language tasks. However, recent studies have found that there are still significant challenges to the logical reasoning abilities of LLMs. This paper summarizes and categorizes the main challenges into two aspects: (1) Logical question answering: LLMs often fail to generate the correct answer to complex logical problems that require sophisticated deductive, inductive, or abductive reasoning over a collection of premises and constraints. (2) Logical consistency: LLMs are prone to producing responses that contradict each other across different questions. For example, the state-of-the-art question-answering LLM Macaw answers "Yes" to both "Is a magpie a bird?" and "Does a bird have wings?" but answers "No" to "Does a magpie have wings?". To facilitate this research direction, we comprehensively investigate the most cutting-edge methods and propose detailed taxonomies of these methods. Specifically, to accurately answer complex logic questions, previous methods can be categorized by their reliance on external solvers, prompts, pretraining, and fine-tuning. To avoid logical contradictions, we discuss the concepts of and solutions for various kinds of logical consistency, including implication, negation, transitivity, and factuality consistency, as well as their composites. In addition, we review commonly used benchmark datasets and evaluation metrics, and discuss promising research directions, such as extensions to modal logic to account for uncertainty, and efficient algorithms that satisfy multiple logical consistencies simultaneously.
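The Macaw example above can be framed as an implication-consistency check over a model's yes/no answers: if the model affirms every premise, it should also affirm the conclusion. A minimal sketch (the function and data names here are illustrative, not from the paper):

```python
# Check implication (and, by chaining, transitivity) consistency of a model's
# yes/no answers. A violation occurs when all premise questions are answered
# "Yes" but the entailed conclusion is answered "No".

def implication_violations(answers, rules):
    """answers: dict mapping question -> bool (True = Yes, False = No).
    rules: list of (premise_questions, conclusion_question) pairs.
    Returns the list of rules the answer set violates."""
    violations = []
    for premises, conclusion in rules:
        if all(answers.get(q) for q in premises) and answers.get(conclusion) is False:
            violations.append((premises, conclusion))
    return violations

# The Macaw example from the abstract:
answers = {
    "Is a magpie a bird?": True,
    "Does a bird have wings?": True,
    "Does a magpie have wings?": False,
}
rules = [
    (["Is a magpie a bird?", "Does a bird have wings?"],
     "Does a magpie have wings?"),
]
print(implication_violations(answers, rules))  # one violation: the magpie rule
```

Consistency-regularized fine-tuning methods surveyed in the paper penalize exactly such violations during training rather than merely detecting them at evaluation time.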
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs' logical reasoning abilities.
Address LLMs' inconsistencies in logical responses.
Classify methods for complex logic question answering.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhances LLMs' logical reasoning
Utilizes external solvers and prompts
Addresses multiple logical consistencies
Fengxiang Cheng
Institute for Logic, Language and Computation, University of Amsterdam
Haoxuan Li
Center for Data Science, Peking University; Machine Learning Department, MBZUAI
Fenrong Liu
Professor of Logic, Tsinghua University
preference logic, social epistemic logic, graph game logic, history of Chinese logic, AI logics
R. V. Rooij
Institute for Logic, Language and Computation, University of Amsterdam
Kun Zhang
Department of Philosophy, CMU
Zhouchen Lin
Professor, Peking University; Fellow of IEEE, IAPR, CSIG & AAIA; ex-VP of Samsung Research
machine learning, computer vision, image processing, numerical optimization