AI Summary
This work addresses the challenge of studying the match-and-copy mechanism in Transformers on natural data, where retrieval and memorization are inherently entangled. To disentangle these components, the authors propose Gaussian Match-and-Copy (GMC), a minimal benchmark task that isolates long-range retrieval through pure second-order correlation signals. GMC uses a Gaussian generative model to construct synthetic data and is analyzed with a simplified attention mechanism under gradient descent dynamics. Theoretically, the authors prove that while the parameter norm diverges, the parameter direction converges to the max-margin separator, revealing an implicit optimization bias toward hard matching decisions. Experiments demonstrate that GMC preserves key characteristics of match-and-copy circuits found in real Transformers, differentiates architectures by their long-range retrieval capabilities, and empirically validates the predicted properties of the optimization trajectory.
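The match-and-copy primitive described above, in its hard-selection limit, can be sketched as a simple lookup: find the most recent earlier occurrence of the current token and emit the token that followed it. This is only an illustrative caricature of the behavior, not the paper's GMC construction:

```python
def match_and_copy(tokens):
    """Hard match-and-copy: find the most recent earlier occurrence of the
    final token, and predict the token that followed it."""
    query = tokens[-1]
    # Scan earlier positions (excluding the final one) from right to left.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == query:
            return tokens[i + 1]  # copy the successor of the match
    return None  # no match in the context

# In "A B C A", the earlier "A" is followed by "B", so predict "B".
print(match_and_copy(["A", "B", "C", "A"]))  # -> B
```

A soft attention head implements a smoothed version of this lookup; the paper's theoretical result concerns how training sharpens the smoothed selection toward this hard one.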
Abstract
Match-and-copy is a core retrieval primitive used at inference time by large language models: retrieve a token from the context that matches the current one, then copy its successor. Yet understanding how this behavior emerges on natural data is challenging because retrieval and memorization are entangled. To disentangle the two, we introduce Gaussian Match-and-Copy (GMC), a minimalist benchmark that isolates long-range retrieval through pure second-order correlation signals. Numerical investigations show that this task retains key qualitative aspects of how Transformers develop match-and-copy circuits in practice, and that it separates architectures by their retrieval capabilities. We also analyze the optimization dynamics in a simplified attention setting. Although many solutions are a priori possible under a regression objective, including ones that do not implement retrieval, we identify an implicit-bias regime in which gradient descent drives the parameter norm to diverge while the parameter direction aligns with the max-margin separator, yielding hard match selection. We prove this max-margin alignment, under explicit technical conditions, for GD trajectories that reach vanishing empirical loss.
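The implicit-bias regime described in the abstract mirrors a well-known phenomenon for gradient descent on linearly separable data with logistic-type losses: the weight norm diverges while the normalized weight direction converges. A minimal NumPy sketch of that generic behavior (not the paper's GMC setting; the data and hyperparameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable 2D data: labels given by the sign of the first coordinate,
# with the classes pushed apart so there is a positive margin.
X = rng.normal(size=(100, 2))
X[:, 0] += np.where(X[:, 0] > 0, 1.0, -1.0)
y = np.sign(X[:, 0])

w = np.zeros(2)
lr = 0.1
dirs = []  # direction snapshots at an early and a late step
for step in range(20000):
    margins = y * (X @ w)
    # Gradient of the mean logistic loss (1/n) * sum log(1 + exp(-margin))
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad
    if step in (1000, 19999):
        dirs.append(w / np.linalg.norm(w))

loss = np.log1p(np.exp(-y * (X @ w))).mean()
print(np.linalg.norm(w), loss)  # norm keeps growing while the loss vanishes
print(dirs[0] @ dirs[1])        # direction is nearly frozen (cosine close to 1)
```

The paper's contribution is proving an analogous alignment for the attention parameters on the GMC task, where the limiting direction implements hard match selection.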