Layer-Order Inversion: Rethinking Latent Multi-Hop Reasoning in Large Language Models

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing research assumes that large language models perform multi-hop reasoning by sequentially computing bridge entities and answers layer by layer; however, this assumption does not generalize to real-world queries. Through systematic inter-layer probing, this work identifies and names a novel phenomenon, "layer-order inversion", in which later-hop answer entities become decodable at earlier layers than the bridge entities they supposedly depend on. Building on this observation, the authors propose a probabilistic recall-and-extract framework in which shallow MLP layers broadly recall relevant information and deeper attention layers selectively extract the final answer. The framework unifies the explanation of chain-of-thought gains and the intrinsic causes of multi-hop failures: even when each single-hop fact is known, errors arise if information is not propagated effectively across layers. It also reinterprets prior layer-wise decoding evidence and is validated through decodability assessments and a functional disentanglement of model components.
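As an illustration of the failure mode described above, the toy sketch below (not the paper's formalization; every probability is invented) caricatures multi-hop answering as broad recall over bridge candidates followed by selective extraction, and shows how correct single-hop knowledge can still yield a wrong multi-hop answer when the correct bridge receives too little recall mass.

```python
# Illustrative toy, not the paper's actual model: a two-stage recall-and-extract
# caricature. All entities and probabilities below are made up for illustration.
recall = {"France": 0.3, "Italy": 0.5, "Spain": 0.2}          # broad shallow recall of bridge candidates
extract = {                                                    # selective extraction, conditioned on the recalled bridge
    "France": {"Paris": 0.9, "Lyon": 0.1},
    "Italy":  {"Rome": 0.9, "Milan": 0.1},
    "Spain":  {"Madrid": 0.9, "Barcelona": 0.1},
}

# Marginal probability of each final answer under this two-stage view.
answer_prob = {}
for bridge, p_recall in recall.items():
    for answer, p_extract in extract[bridge].items():
        answer_prob[answer] = answer_prob.get(answer, 0.0) + p_recall * p_extract

print(max(answer_prob, key=answer_prob.get), answer_prob)
# Each single-hop lookup (each row of `extract`) is correct, yet the top
# multi-hop answer is wrong because the correct bridge ("France") was recalled
# with low mass, i.e. information was not propagated effectively across stages.
```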

📝 Abstract
Large language models (LLMs) perform well on multi-hop reasoning, yet how they internally compose multiple facts remains unclear. Recent work proposes a hop-aligned circuit hypothesis, suggesting that bridge entities are computed sequentially across layers before later-hop answers. Through systematic analyses on real-world multi-hop queries, we show that this hop-aligned assumption does not generalize: later-hop answer entities can become decodable earlier than bridge entities, a phenomenon we call layer-order inversion, which strengthens with the total number of hops. To explain this behavior, we propose a probabilistic recall-and-extract framework that models multi-hop reasoning as broad probabilistic recall in shallow MLP layers followed by selective extraction in deeper attention layers. The framework is empirically validated through systematic probing analyses; it reinterprets prior layer-wise decoding evidence, explains chain-of-thought gains, and provides a mechanistic diagnosis of multi-hop failures that occur despite correct single-hop knowledge. Code is available at https://github.com/laquabe/Layer-Order-Inversion.
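For readers unfamiliar with layer-wise decodability analysis, the following is a minimal sketch of the kind of probe the abstract refers to, assuming a logit-lens-style projection of intermediate hidden states through the unembedding; the model name, prompt, and bridge/answer strings are illustrative placeholders, not the paper's setup.

```python
# Minimal logit-lens-style probe (a sketch, not the authors' code): check at
# which layer the bridge entity vs. the final answer first becomes decodable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper's experiments presumably use larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompt = "The capital of the country where the Eiffel Tower is located is"
bridge, answer = " France", " Paris"  # hypothetical bridge / answer entities
bridge_id = tok(bridge, add_special_tokens=False).input_ids[0]
answer_id = tok(answer, add_special_tokens=False).input_ids[0]

with torch.no_grad():
    out = model(**tok(prompt, return_tensors="pt"))

# Project each layer's last-position hidden state through the final layer norm
# and unembedding (logit lens), and record the rank of each entity's token.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    order = logits.argsort(descending=True)
    bridge_rank = (order == bridge_id).nonzero().item()
    answer_rank = (order == answer_id).nonzero().item()
    print(f"layer {layer:2d}  bridge rank {bridge_rank:5d}  answer rank {answer_rank:5d}")
# Layer-order inversion would show the answer reaching a low rank (becoming
# decodable) at an earlier layer than the bridge entity.
```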
Problem

Research questions and friction points this paper is trying to address.

multi-hop reasoning
layer-order inversion
hop-aligned circuit hypothesis
latent reasoning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

layer-order inversion
multi-hop reasoning
probabilistic recall-and-extract
mechanistic interpretability
large language models
🔎 Similar Papers
No similar papers found.