How Do LLMs Perform Two-Hop Reasoning in Context?

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how two-hop logical reasoning emerges in large language models (LLMs) in the presence of distracting premises, tracing the transition from random guessing to accurate inference. Methodologically, the authors train and analyze three-layer transformers on synthetic data, complemented by reverse engineering, training-dynamics modeling, and cross-model behavioral experiments. They identify a distinct two-phase pattern in the emergence of two-hop reasoning: a prolonged phase of slow, near-random performance followed by an abrupt transition to high accuracy, with the trained transformer ultimately reaching 100% accuracy on the task. They further propose an interpretable three-parameter model that quantitatively characterizes this phase transition, and experiments suggest the discovered mechanisms generalize across LLMs of varying scales. The study offers a mechanistic explanatory framework for the emergence of compositional reasoning in LLMs, bridging theoretical modeling with empirical training dynamics.

📝 Abstract
"Socrates is human. All humans are mortal. Therefore, Socrates is mortal." This classical example demonstrates two-hop reasoning, where a conclusion logically follows from two connected premises. While transformer-based Large Language Models (LLMs) can perform two-hop reasoning, they tend to collapse to random guessing when faced with distracting premises. To understand the underlying mechanism, we train a three-layer transformer on synthetic two-hop reasoning tasks. The training dynamics show two stages: a slow learning phase, where the 3-layer transformer performs random guessing like LLMs, followed by an abrupt phase transition, where the 3-layer transformer suddenly reaches $100\%$ accuracy. Through reverse engineering, we explain the inner mechanisms of how models initially learn to guess randomly among distractions, and how they eventually learn to ignore them. We further propose a three-parameter model that supports the causal claims linking these mechanisms to the transformer's training dynamics. Finally, experiments on LLMs suggest that the discovered mechanisms generalize across scales. Our methodologies provide new perspectives for the scientific understanding of LLMs, and our findings offer new insights into how reasoning emerges during training.
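To make the task concrete, here is a minimal sketch of what a synthetic two-hop example with distracting premises might look like. This is our illustrative reconstruction, not the paper's actual data pipeline: the entity names, premise templates, and the `make_example` helper are all assumptions. Each example contains one relevant chain ("a is b", "All b are c") plus distractor second-hop rules about unrelated categories, and the model must answer the query with the attribute reached via the chain.

```python
import random

def make_example(entities, bridges, attributes, n_distractors=2, rng=random):
    """Build one synthetic two-hop example (hypothetical format).

    The relevant chain is: entity -> bridge category -> attribute.
    Distractors are second-hop rules about other bridge categories,
    which a model guessing randomly would confuse with the true rule.
    """
    a = rng.choice(entities)     # e.g. "Socrates"
    b = rng.choice(bridges)      # e.g. "human"
    c = rng.choice(attributes)   # e.g. "mortal"
    premises = [f"{a} is {b}.", f"All {b} are {c}."]
    # Distracting premises: rules about bridge categories the query never uses.
    for other in rng.sample([x for x in bridges if x != b], n_distractors):
        premises.append(f"All {other} are {rng.choice(attributes)}.")
    rng.shuffle(premises)
    return {"premises": premises, "query": f"{a} is ?", "answer": c}

example = make_example(
    entities=["Socrates", "Plato"],
    bridges=["human", "cat", "robot"],
    attributes=["mortal", "curious", "metallic"],
)
```

Answering correctly requires composing the first-hop fact with the matching second-hop rule while ignoring the distractor rules, which is exactly where, per the abstract, models initially fall back to random guessing.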
Problem

Research questions and friction points this paper is trying to address.

LLM performance on two-hop reasoning
Impact of distracting premises on reasoning
Mechanisms of learning to ignore distractions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-layer transformer training
Reverse engineering inner mechanisms
Three-parameter model validation