🤖 AI Summary
This work investigates how in-context learning (ICL) of contextual linear regression emerges during gradient descent training of linear self-attention models. Two key-query (KQ) parametrizations are analyzed: one with the key and query weights merged into a single matrix, and one with separate key and query matrices. Using multi-head linear attention models, fixed-point analysis, and reductions to scalar ODEs, the authors derive analytically tractable training trajectories. The two parametrizations yield fundamentally different dynamics: the merged parametrization has two fixed points and its loss exhibits a single, abrupt drop, whereas the separate parametrization has exponentially many fixed points and exhibits saddle-to-saddle dynamics, with the model implementing in-context principal component regression whose number of principal components grows over training. The analysis thus characterizes ICL emergence through the lens of training dynamics, showing that the choice of parametrization determines whether the ability is acquired abruptly or progressively.
📝 Abstract
While attention-based models have demonstrated a remarkable ability for in-context learning, the theoretical understanding of how these models acquire this ability through gradient descent training is still preliminary. Towards answering this question, we study the gradient descent dynamics of multi-head linear self-attention trained for in-context linear regression. We examine two parametrizations of linear self-attention: one with the key and query weights merged as a single matrix (common in theoretical studies), and one with separate key and query matrices (closer to practical settings). For the merged parametrization, we show the training dynamics have two fixed points and the loss trajectory exhibits a single, abrupt drop. We derive an analytical time-course solution for a certain class of datasets and initializations. For the separate parametrization, we show the training dynamics have exponentially many fixed points and the loss exhibits saddle-to-saddle dynamics, which we reduce to scalar ordinary differential equations. During training, the model implements principal component regression in context, with the number of principal components increasing over training time. Overall, we characterize how in-context learning abilities evolve during gradient descent training of linear attention, revealing dynamics of abrupt acquisition versus progressive improvement in models with different parametrizations.
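To make the two parametrizations concrete, here is a minimal NumPy sketch of one head of linear self-attention evaluated on an in-context linear regression prompt. The prompt layout (tokens $z_i = [x_i; y_i]$ with a query token whose label slot is zero), the value matrix that reads out the label row, and the specific dimensions `d`, `n`, and head dimension `r` are illustrative assumptions, not the paper's exact setup; the point is only that the merged form uses a full KQ matrix while the separate form factorizes it as $W_K^\top W_Q$, whose rank is capped by the head dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 32  # input dimension and context length (assumed values)

# In-context linear regression prompt: context tokens z_i = [x_i; y_i],
# plus a query token [x_q; 0] whose label slot the model must fill in.
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true
x_q = rng.standard_normal(d)

Z = np.zeros((d + 1, n + 1))
Z[:d, :n], Z[d, :n] = X.T, y
Z[:d, n] = x_q  # query token; its label entry stays 0

def linear_attention_pred(Z, W_V, W_KQ, n):
    # One head of linear (softmax-free) self-attention:
    # output = W_V Z (Z^T W_KQ Z) / n; the prediction is the label
    # entry of the query token's output column.
    out = W_V @ Z @ (Z.T @ W_KQ @ Z) / n
    return out[-1, -1]

# Value matrix that copies the label row to the output (assumed form).
W_V = np.zeros((d + 1, d + 1))
W_V[d, d] = 1.0

# Merged parametrization: a single, unconstrained (d+1)x(d+1) KQ matrix.
W_KQ_merged = np.zeros((d + 1, d + 1))
W_KQ_merged[:d, :d] = np.eye(d)  # attend via x_i . x_q inner products
pred_merged = linear_attention_pred(Z, W_V, W_KQ_merged, n)

# Separate parametrization: W_KQ = W_K^T W_Q, so its rank is limited by
# the key/query inner dimension r; here r = 2 projects the context onto
# only the first two input directions.
r = 2
W_K = np.zeros((r, d + 1))
W_K[:, :r] = np.eye(r)
W_Q = W_K.copy()
pred_separate = linear_attention_pred(Z, W_V, W_K.T @ W_Q, n)

print(pred_merged, pred_separate)
```

With these choices the merged head computes $\frac{1}{n}\sum_i y_i\, x_i^\top x_q$ (one preconditioned gradient step of in-context regression), while the rank-$r$ separate head only uses the first $r$ coordinates, mirroring how a growing effective principal component count during training corresponds to progressively richer in-context regression.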