LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning

📅 2025-02-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the limited logical reasoning capabilities of large language models (LLMs), particularly on fundamental analogical reasoning tasks. It proposes the first controllable analogical reasoning evaluation framework, parameterized along three dimensions: modality (textual, visual, symbolic), difficulty level, and task format, enabling systematic dissection of how LLMs respond to inductive, abductive, and deductive reasoning. Through dynamic reasoning-path analysis, construction of a novel multimodal benchmark, and validation of generalization to in-context learning, the authors identify the "hypothesis selection–verification–refinement" paradigm as a scalable mechanism for enhancing logical reasoning performance. Crucially, the study quantitatively characterizes, for the first time, the performance boundaries and complementary strengths of distinct reasoning paradigms, empirically validating their cross-task transferability. The work establishes a reproducible methodology and empirical foundation for logic-driven optimization of LLM reasoning.

📝 Abstract
Modern large language models (LLMs) employ various forms of logical inference, both implicitly and explicitly, when addressing reasoning tasks. Understanding how to optimally leverage these inference paradigms is critical for advancing LLMs' reasoning capabilities. This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning -- a fundamental cognitive task -- that is systematically parameterized across three dimensions: modality (textual, visual, symbolic), difficulty (easy, medium, hard), and task format (multiple-choice or free-text generation). We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines across these dimensions, and demonstrate that our findings generalize to broader in-context learning tasks. Additionally, we investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference in LLM reasoning. This exploratory study provides a foundation for future research in enhancing LLM reasoning through systematic logical inference strategies.
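The abstract describes an evaluation environment parameterized over modality, difficulty, and task format. A minimal sketch of how such a three-dimensional evaluation grid could be enumerated (the dimension names follow the abstract; the function and constant names are illustrative, not from the paper):

```python
from itertools import product

# Dimension values taken from the abstract; identifiers are hypothetical.
MODALITIES = ("textual", "visual", "symbolic")
DIFFICULTIES = ("easy", "medium", "hard")
FORMATS = ("multiple-choice", "free-text")

def evaluation_grid():
    """Return every (modality, difficulty, format) cell of the 3-D grid."""
    return list(product(MODALITIES, DIFFICULTIES, FORMATS))

# 3 modalities x 3 difficulties x 2 formats = 18 evaluation cells.
grid = evaluation_grid()
print(len(grid))
```

Each of the 18 cells would hold a set of analogical reasoning instances, letting the inductive, abductive, and deductive pipelines be compared under controlled conditions.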
Problem

Research questions and friction points this paper is trying to address.

Understanding logical inference dynamics in LLMs
Exploring optimal inference paradigms for reasoning tasks
Scaling logical inference in large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled evaluation environment
Inductive, abductive, deductive analysis
Hypothesis selection, verification, refinement
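The hypothesis selection–verification–refinement paradigm highlighted above can be read as an iterative loop. A generic sketch of that control flow, under the assumption that selection ranks candidate hypotheses, verification checks the current one, and refinement revises it (all function names and the loop structure are illustrative, not the paper's implementation):

```python
from typing import Callable, Iterable, Optional

def select_verify_refine(
    candidates: Iterable[str],
    score: Callable[[str], float],
    verify: Callable[[str], bool],
    refine: Callable[[str], str],
    max_rounds: int = 3,
) -> Optional[str]:
    """Illustrative selection -> verification -> refinement loop."""
    # Selection: pick the highest-scoring candidate hypothesis.
    hypothesis = max(candidates, key=score)
    for _ in range(max_rounds):
        # Verification: accept the hypothesis if it passes the check.
        if verify(hypothesis):
            return hypothesis
        # Refinement: revise the failed hypothesis and retry.
        hypothesis = refine(hypothesis)
    return None  # no verified hypothesis within the round budget
```

In an LLM setting, `score`, `verify`, and `refine` would themselves be model calls (e.g. a verifier prompt and a revision prompt); here they are abstract callables so the loop structure stands alone.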