Gaussian Match-and-Copy: A Minimalist Benchmark for Studying Transformer Induction

πŸ“… 2026-02-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of studying the match-and-copy mechanism in Transformers on natural data, where retrieval and memorization are inherently entangled. To disentangle these components, the authors propose GMC, a minimalist benchmark task that isolates long-range retrieval through pure second-order correlation signals. GMC uses a Gaussian generative model to construct synthetic data, and the analysis combines a simplified attention mechanism with gradient descent dynamics. Theoretically, the authors prove that the parameter direction converges to the max-margin separator, revealing an implicit optimization bias toward hard matching decisions. Experiments demonstrate that GMC preserves key characteristics of match-and-copy circuits found in real Transformers, differentiates architectures by their long-range retrieval capabilities, and empirically validates the predicted properties of the optimization trajectory.

πŸ“ Abstract
Match-and-copy is a core retrieval primitive used at inference time by large language models to retrieve a matching token from the context and then copy its successor. Yet, understanding how this behavior emerges on natural data is challenging because retrieval and memorization are entangled. To disentangle the two, we introduce Gaussian Match-and-Copy (GMC), a minimalist benchmark that isolates long-range retrieval through pure second-order correlation signals. Numerical investigations show that this task retains key qualitative aspects of how Transformers develop match-and-copy circuits in practice, and separates architectures by their retrieval capabilities. We also analyze the optimization dynamics in a simplified attention setting. Although many solutions are a priori possible under a regression objective, including ones that do not implement retrieval, we identify an implicit-bias regime in which gradient descent drives the parameters to diverge while their direction aligns with the max-margin separator, yielding hard match selection. We prove this max-margin alignment for GD trajectories that reach vanishing empirical loss under explicit technical conditions.
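The abstract does not specify the generative model in detail, but the described setup (Gaussian data, a query whose only useful signal is its second-order correlation with one earlier "match" token, and a target equal to that token's successor) can be sketched as follows. All function and parameter names here are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

def make_gmc_batch(n_seq=128, seq_len=16, dim=8, noise=0.1, rng=None):
    """Hypothetical sketch of a Gaussian match-and-copy batch.

    Each sequence is i.i.d. standard Gaussian, except the final (query)
    token, which is a noisy copy of one earlier "match" token. The
    regression target is the token that follows the match.
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal((n_seq, seq_len, dim))
    # Match positions leave room for a successor before the query token.
    match = rng.integers(0, seq_len - 2, size=n_seq)
    rows = np.arange(n_seq)
    x[rows, -1] = x[rows, match] + noise * rng.standard_normal((n_seq, dim))
    y = x[rows, match + 1]  # target: copy the matched token's successor
    return x, y
```

Under this sketch, solving the task requires genuine long-range retrieval: token identities carry no memorizable content, and only the correlation between the query and its match indicates where to copy from.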
Problem

Research questions and friction points this paper is trying to address.

match-and-copy
retrieval
memorization
Transformer
induction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Match-and-Copy
Transformer induction
implicit bias
long-range retrieval
max-margin alignment