Towards a Mechanistic Understanding of Propositional Logical Reasoning in Large Language Models

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the internal computational mechanisms underlying propositional logic reasoning in large language models. Leveraging the PropLogic-MI dataset, the authors conduct an interpretability analysis of Qwen3 (8B/14B) and propose a unified four-component architecture comprising staged computation, information propagation, fact retrieval, and specialized attention heads. Through functional classification of attention heads, cross-layer information flow tracing, and boundary token aggregation, the work systematically reveals how the model orchestrates multi-layer computations to perform both one-hop and two-hop logical inference. The proposed architecture demonstrates robust generalization across model scales, rule types, and reasoning depths, offering mechanistic evidence for the structured logical reasoning capabilities of large language models.
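One of the paper's core methods is the functional classification of attention heads, grouping heads by where their attention mass concentrates (e.g., on source facts or on boundary tokens). The sketch below illustrates the general idea only; the function name, token roles, and threshold are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the paper's code): classify an attention head
# by which token role receives the majority of its attention mass, in the
# spirit of the paper's functional head taxonomy. Roles and the 0.5
# threshold are assumptions for demonstration.

def classify_head(attn_row, token_roles, threshold=0.5):
    """attn_row: attention probabilities from the final token over source
    positions; token_roles: a role label per position ('fact', 'boundary',
    'other'). Returns the dominant role, or 'diffuse' if none dominates."""
    mass = {}
    for p, role in zip(attn_row, token_roles):
        mass[role] = mass.get(role, 0.0) + p
    role, m = max(mass.items(), key=lambda kv: kv[1])
    return role if m >= threshold else "diffuse"

roles = ["fact", "fact", "boundary", "other", "other"]
# A head that keeps attending to the premises would be a candidate
# "fact retrospection" head; one focused on boundary tokens would be a
# candidate "information transmission" head, in the paper's terminology.
fact_head = [0.40, 0.35, 0.10, 0.10, 0.05]
boundary_head = [0.05, 0.05, 0.80, 0.05, 0.05]
print(classify_head(fact_head, roles))      # fact
print(classify_head(boundary_head, roles))  # boundary
```

In practice such attention patterns would come from model hooks rather than hand-written lists; the classification step itself is this simple aggregation.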

📝 Abstract
Understanding how Large Language Models (LLMs) perform logical reasoning internally remains a fundamental challenge. While prior mechanistic studies focus on identifying task-specific circuits, they leave open the question of what computational strategies LLMs employ for propositional reasoning. We address this gap through comprehensive analysis of Qwen3 (8B and 14B) on PropLogic-MI, a controlled dataset spanning 11 propositional logic rule categories across one-hop and two-hop reasoning. Rather than asking "which components are necessary," we ask "how does the model organize computation?" Our analysis reveals a coherent computational architecture comprising four interlocking mechanisms: Staged Computation (layer-wise processing phases), Information Transmission (information flow aggregation at boundary tokens), Fact Retrospection (persistent re-access of source facts), and Specialized Attention Heads (functionally distinct head types). These mechanisms generalize across model scales, rule types, and reasoning depths, providing mechanistic evidence that LLMs employ structured computational strategies for logical reasoning.
Problem

Research questions and friction points this paper is trying to address.

propositional logic
logical reasoning
large language models
mechanistic understanding
computational strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

mechanistic interpretability
propositional logic reasoning
computational architecture
staged computation
specialized attention heads
Danchun Chen
MOE Key Lab of Computational Linguistics, Peking University
Qiyao Yan
MOE Key Lab of Computational Linguistics, Peking University
Liangming Pan
Assistant Professor, School of Computer Science, Peking University
Natural Language Processing · Large Language Models · Machine Learning