The Role of Deductive and Inductive Reasoning in Large Language Models

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face limitations in complex reasoning due to static prompting and insufficient adaptability. To address this, we propose DID, a novel input-driven reasoning framework grounded in cognitive alignment, which synergistically integrates deductive and inductive reasoning. DID introduces a dual-metric complexity assessment—combining the Littlestone dimension (a measure from computational learning theory) and information entropy—designed in accordance with cognitive science principles. This enables quantitative task difficulty estimation, dynamic decomposition of reasoning steps, and adaptive evolution of reasoning paths—eliminating reliance on fixed templates. Notably, DID is the first to incorporate the Littlestone dimension into LLM-based task difficulty modeling. Evaluated across multiple benchmarks—including AIW, MR-GSM8K, and Holiday Puzzle—DID achieves 70.3% accuracy on AIW, outperforming Tree-of-Thought (ToT) by 8.1 percentage points while reducing computational overhead. It further demonstrates superior robustness and interpretability on temporally complex reasoning tasks.

📝 Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in reasoning tasks, yet their reliance on static prompt structures and limited adaptability to complex scenarios remain a significant challenge. In this paper, we propose the Deductive and InDuctive (DID) method, a novel framework that enhances LLM reasoning by dynamically integrating both deductive and inductive reasoning approaches. Drawing from cognitive science principles, DID implements a dual-metric complexity evaluation system that combines the Littlestone dimension and information entropy to precisely assess task difficulty and guide decomposition strategies. DID enables the model to progressively adapt its reasoning pathways based on problem complexity, mirroring human cognitive processes. We evaluate DID's effectiveness across multiple benchmarks, including AIW and MR-GSM8K, as well as our custom Holiday Puzzle dataset for temporal reasoning. Our results demonstrate significant improvements in reasoning quality and solution accuracy, achieving 70.3% accuracy on AIW (compared to 62.2% for Tree of Thought) while maintaining lower computational costs. The success of DID in improving LLM performance while preserving computational efficiency suggests promising directions for developing more cognitively aligned and capable language models. Our work contributes a theoretically grounded, input-centric approach to enhancing LLM reasoning capabilities, offering an efficient alternative to traditional output-exploration methods.
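The abstract does not give the exact formula for the dual-metric complexity score, so the following is only a minimal sketch of the general idea: measure the Shannon entropy of the input's token distribution and combine it with a Littlestone-dimension estimate (assumed here to be supplied externally) via an assumed linear weighting. The function names, the `alpha` weight, and the linear combination are all illustrative assumptions, not the paper's definitions.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the empirical token distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def complexity_score(tokens, littlestone_dim, alpha=0.5):
    """Hypothetical combination of the two metrics into one difficulty score.

    `littlestone_dim` is assumed to be estimated elsewhere for the task's
    concept class; `alpha` is an assumed weighting, not taken from the paper.
    """
    return alpha * littlestone_dim + (1 - alpha) * shannon_entropy(tokens)

# Toy input resembling an AIW-style word problem, tokenized by whitespace.
tokens = "alice has 3 brothers and 2 sisters how many sisters does her brother have".split()
score = complexity_score(tokens, littlestone_dim=2.0)
```

A score like this could then gate how aggressively the reasoning path is decomposed, with higher scores triggering finer-grained steps, which is the adaptive behavior the paper describes.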
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning capabilities
Dynamic integration of deductive and inductive reasoning
Improving computational efficiency in reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

DID method integrates deductive and inductive reasoning
Dual-metric system evaluates task complexity dynamically
Adapts reasoning pathways based on problem complexity
👥 Authors
Chengkun Cai
University of Edinburgh
Xu Zhao
University of Edinburgh
Haoliang Liu
University of Manchester
Zhongyu Jiang
Apple Inc.
Tianfang Zhang
Tsinghua University
Zongkai Wu
FancyTech
Jenq-Neng Hwang
University of Washington
Lei Li
University of Copenhagen, University of Washington