🤖 AI Summary
This paper investigates the in-context memorization error of nonlinear attention mechanisms in the high-dimensional regime where both the number of tokens $n$ and the embedding dimension $p$ diverge, with $n/p \to \gamma > 0$. Addressing the lack of rigorous theoretical understanding of nonlinear attention's representational capacity, we develop the first asymptotic analysis by integrating large-kernel random matrix theory with high-dimensional kernel methods. Our theoretical characterization precisely quantifies the asymptotic memorization error of nonlinear attention. Both the analysis and numerical experiments demonstrate that input structure—particularly directional alignment between attention weights and the signal subspace—substantially mitigates the bias induced by nonlinear activations; remarkably, when the attention weights align with the input signal direction, nonlinear attention achieves *lower* memorization error than its linear counterpart. This work uncovers a novel synergy between nonlinearity and structural priors in enhancing memorization performance, providing foundational theoretical guidance for designing efficient and interpretable attention architectures.
📝 Abstract
Attention mechanisms have revolutionized machine learning (ML) by enabling efficient modeling of global dependencies across inputs. Their inherently parallelizable structure allows for efficient scaling with the rapidly increasing size of both pretraining data and model parameters. Yet, despite their central role as the computational backbone of modern large language models (LLMs), the theoretical understanding of Attention, especially in the nonlinear setting, remains limited.
In this paper, we provide a precise characterization of the *in-context memorization error* of *nonlinear Attention*, in the high-dimensional proportional regime where the number of input tokens $n$ and their embedding dimension $p$ are both large and comparable. Leveraging recent advances in the theory of large kernel random matrices, we show that nonlinear Attention typically incurs higher memorization error than linear ridge regression on random inputs. However, this gap vanishes, and can even be reversed, when the input exhibits statistical structure, particularly when the Attention weights align with the input signal direction. Our results reveal how nonlinearity and input structure interact to govern the memorization performance of nonlinear Attention. The theoretical insights are supported by numerical experiments.
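The setting described above can be illustrated with a small numerical sketch. The code below is not the paper's experiment: the tanh activation, the spike strengths, and the definition of memorization error as the ridge-regularized training residual of fitting labels through an attention score matrix are all illustrative assumptions, chosen only to mirror the two regimes the abstract contrasts (unstructured random inputs vs. inputs with a signal direction aligned with the attention weights).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 200   # tokens and embedding dimension, so n/p -> gamma = 2
lam = 1e-3        # small ridge term for numerical stability


def memorization_error(K, y, lam):
    """Hypothetical memorization error: ridge-regularized training
    residual of fitting y through the score matrix K."""
    v = np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ y)
    return float(np.mean((y - K @ v) ** 2))


# Unstructured case: isotropic Gaussian tokens and attention weights.
X = rng.standard_normal((p, n)) / np.sqrt(p)
W = rng.standard_normal((p, p)) / np.sqrt(p)
y = rng.standard_normal(n)

scores = X.T @ W @ X                                    # raw attention scores
err_lin = memorization_error(scores, y, lam)            # linear attention
err_non = memorization_error(np.tanh(scores), y, lam)   # nonlinear attention

# Structured case: tokens carry a signal along a direction u,
# and the attention weights are aligned with that same direction.
u = rng.standard_normal(p)
u /= np.linalg.norm(u)
X_s = X + 2.0 * np.outer(u, rng.standard_normal(n)) / np.sqrt(p)
W_s = W + 5.0 * np.outer(u, u)
err_non_aligned = memorization_error(np.tanh(X_s.T @ W_s @ X_s), y, lam)

print(err_lin, err_non, err_non_aligned)
```

Under these toy assumptions, comparing `err_lin` with `err_non` probes the gap on random inputs, while `err_non_aligned` probes how weight-signal alignment changes the nonlinear error; the precise asymptotic values are what the paper's theory characterizes.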