Logical Phase Transitions: Understanding Collapse in LLM Logical Reasoning

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the sharp performance degradation of large language models (LLMs) on high-complexity symbolic logical reasoning tasks. It reveals, for the first time, a “logical phase transition” phenomenon in which LLMs’ reasoning accuracy remains stable at low logical depth but collapses abruptly once depth crosses a critical threshold. To mitigate this, the authors propose a neuro-symbolic curriculum tuning framework that adaptively aligns natural language with formal logical symbols and dynamically reshapes training near the phase-transition boundary. Evaluated across five benchmarks, the approach effectively alleviates reasoning collapse at high complexity, yielding average accuracy gains of +1.26 under naive prompting and +3.95 under chain-of-thought (CoT) prompting, and substantially improves generalization to unseen logical compositions.

📝 Abstract
Symbolic logical reasoning is a critical yet underexplored capability of large language models (LLMs), providing reliable and verifiable decision-making in high-stakes domains such as mathematical reasoning and legal judgment. In this study, we present a systematic analysis of logical reasoning under controlled increases in logical complexity, and reveal a previously unrecognized phenomenon, which we term Logical Phase Transitions: rather than degrading smoothly, logical reasoning performance remains stable within a regime but collapses abruptly beyond a critical logical depth, mirroring physical phase transitions such as water freezing beyond a critical temperature threshold. Building on this insight, we propose Neuro-Symbolic Curriculum Tuning, a principled framework that adaptively aligns natural language with logical symbols to establish a shared representation, and reshapes training dynamics around phase-transition boundaries to progressively strengthen reasoning at increasing logical depths. Experiments on five benchmarks show that our approach effectively mitigates logical reasoning collapse at high complexity, yielding average accuracy gains of +1.26 in naive prompting and +3.95 in CoT, while improving generalization to unseen logical compositions. Code and data are available at https://github.com/AI4SS/Logical-Phase-Transitions.
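The abstract's core empirical claim is that accuracy stays flat within a regime and then drops abruptly past a critical logical depth, rather than decaying smoothly. A minimal sketch of how such a boundary could be located from accuracy-vs-depth measurements is below; the accuracy numbers and the `find_critical_depth` helper are illustrative assumptions, not the paper's actual data or method.

```python
# Sketch: locating a "logical phase transition" boundary from
# accuracy-vs-depth measurements. Numbers below are illustrative only.

def find_critical_depth(acc_by_depth):
    """Return the logical depth at which accuracy drops most sharply.

    acc_by_depth: dict mapping logical depth -> accuracy in [0, 1].
    """
    depths = sorted(acc_by_depth)
    # Drop attributed to the deeper end of each consecutive pair.
    drops = {d2: acc_by_depth[d1] - acc_by_depth[d2]
             for d1, d2 in zip(depths, depths[1:])}
    return max(drops, key=drops.get)

# A stable plateau followed by an abrupt collapse at depth 6:
accuracy = {1: 0.95, 2: 0.94, 3: 0.93, 4: 0.91, 5: 0.88, 6: 0.42, 7: 0.30}
print(find_critical_depth(accuracy))  # → 6
```

In practice each accuracy value would come from evaluating the model on a batch of formulas generated at that depth; the largest consecutive drop then marks the phase-transition boundary the paper describes.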
Problem

Research questions and friction points this paper is trying to address.

logical reasoning
large language models
phase transitions
logical complexity
reasoning collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logical Phase Transitions
Neuro-Symbolic Curriculum Tuning
logical reasoning collapse
symbolic reasoning
large language models
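The curriculum-tuning idea above reshapes training dynamics around the phase-transition boundary. One plausible way to realize "concentrate training near the boundary" is to weight examples by their distance from the critical depth; this sketch is an assumption about how such a scheme could look, not the authors' implementation.

```python
import math

def curriculum_weights(depths, critical_depth, sharpness=1.0):
    """Sampling weights that emphasize logical depths near the
    phase-transition boundary (Gaussian-shaped, normalized to sum to 1).

    depths: iterable of logical depths present in the training pool.
    critical_depth: detected boundary depth to focus training around.
    sharpness: larger values concentrate weight more tightly.
    """
    w = [math.exp(-sharpness * (d - critical_depth) ** 2) for d in depths]
    total = sum(w)
    return [x / total for x in w]

# With a boundary at depth 6, depth-6 examples get the largest weight.
weights = curriculum_weights(range(1, 9), critical_depth=6)
```

Training would then sample batches according to these weights, progressively strengthening the model exactly where its reasoning begins to collapse; the boundary (and hence the weights) can be re-estimated as training shifts it to greater depths.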