AI Summary
Can a two-layer, single-head Transformer model arbitrary-order Markov processes?
Method: Through a theoretical analysis of the attention mechanism, we explicitly construct induction heads that represent a $k$th-order Markov chain as a conditional $k$-gram model (sketched below), and we analyze the learning dynamics of a simplified first-order variant to characterize how this in-context representation emerges during training.
Contribution: We give the first rigorous proof that a two-layer, single-head Transformer suffices to exactly represent any $k$th-order Markov process. This yields the tightest known theoretical characterization of the interplay between Transformer depth and Markov order, demonstrating that even shallow architectures possess strong in-context learning capabilities. Moreover, we show how effective in-context representations emerge during training. These results sharpen our understanding of the inductive biases of Transformers for sequence modeling.
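To make the target object concrete, here is a minimal NumPy sketch (our illustration, not the paper's construction; the function name is hypothetical) of the conditional $k$-gram statistic that an induction head computes in context: the empirical distribution of the next symbol given the last $k$ symbols, estimated from earlier occurrences of that suffix in the same sequence.

```python
import numpy as np

def conditional_kgram(seq, k, vocab_size):
    """Empirical next-symbol distribution given the last k symbols of `seq`.

    Scans the context for earlier occurrences of the current length-k
    suffix and counts which symbol followed each occurrence -- the
    in-context statistic that an induction head computes.
    """
    suffix = tuple(seq[-k:])
    counts = np.zeros(vocab_size)
    for t in range(len(seq) - k):            # candidate match positions
        if tuple(seq[t:t + k]) == suffix:    # earlier occurrence of the suffix
            counts[seq[t + k]] += 1          # count the symbol that followed it
    if counts.sum() == 0:                    # unseen suffix: fall back to uniform
        return np.full(vocab_size, 1.0 / vocab_size)
    return counts / counts.sum()

# A binary sequence: predict the next symbol given the last k = 2 symbols.
seq = [0, 1, 1, 0, 1, 1, 0, 1, 1]
print(conditional_kgram(seq, k=2, vocab_size=2))  # suffix (1, 1) -> [1., 0.]
```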
Abstract
In-context learning (ICL) is a hallmark capability of transformers, through which trained models adapt to new tasks by leveraging information from the input context. Prior work has shown that ICL emerges in transformers due to the presence of special circuits called induction heads. Given the equivalence between induction heads and conditional k-grams, a recent line of work modeling sequential inputs as Markov processes has revealed the fundamental impact of model depth on ICL capabilities: while a two-layer transformer can efficiently represent a conditional 1-gram model, its single-layer counterpart cannot solve the task unless it is exponentially large. However, for higher-order Markov sources, the best known constructions require at least three layers (each with a single attention head), leaving open the question: can a two-layer, single-head transformer represent any kth-order Markov process? In this paper, we answer this question affirmatively, showing theoretically that a two-layer transformer with one head per layer can indeed represent any conditional k-gram. Our result thus provides the tightest known characterization of the interplay between transformer depth and Markov order for ICL. Building on this, we further analyze the learning dynamics of our two-layer construction, focusing on a simplified variant for first-order Markov chains, and illustrate how effective in-context representations emerge during training. Together, these results deepen our understanding of transformer-based ICL and show how even shallow architectures can exhibit surprisingly strong ICL capabilities on structured sequence-modeling tasks.
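To illustrate the mechanism the abstract describes, below is a hypothetical hard-attention emulation of a two-layer, single-head induction head for the first-order case (k = 1); the function name and the fallback-to-uniform behavior are our own illustrative choices, not the paper's construction. The first layer copies each position's predecessor into its state; the second layer attends from the final token to positions whose copied predecessor matches it and averages the tokens found there, reproducing the conditional 1-gram statistic.

```python
import numpy as np

def induction_head_1gram(seq, vocab_size):
    """Hard-attention emulation of a two-layer induction head (k = 1).

    Layer 1: position t attends to position t-1, so its state carries the
    previous token. Layer 2: the final position attends to every position
    whose carried previous token equals the current token and averages the
    one-hot tokens found there, giving the conditional 1-gram distribution.
    """
    E = np.eye(vocab_size)[seq]   # (T, V) one-hot token embeddings

    # Layer 1: shift-by-one attention -> prev[t] = one-hot of seq[t-1].
    prev = np.zeros_like(E)
    prev[1:] = E[:-1]

    # Layer 2: query = current (last) token; keys = carried previous tokens.
    query = E[-1]                 # (V,)
    scores = prev @ query         # 1 where the previous token matches, else 0
    if scores.sum() == 0:         # no match in context: fall back to uniform
        return np.full(vocab_size, 1.0 / vocab_size)
    attn = scores / scores.sum()  # uniform attention over matched positions
    return attn @ E               # average of the tokens that followed a match

seq = [0, 1, 0, 1, 1, 0, 1]
print(induction_head_1gram(seq, vocab_size=2))  # -> [2/3, 1/3]
```

Roughly, the open question is whether this two-layer pattern extends from a single carried predecessor to a carried length-k window; the paper answers yes, though its actual construction may differ from this sketch.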