🤖 AI Summary
This work investigates why Transformers optimize so well despite their non-convex landscape, focusing on how the optimization error depends on sequence length. By analyzing a shallow multi-head Transformer trained by projected gradient descent in the kernel regime, the authors establish finite-time, non-asymptotic convergence guarantees in which the required model width grows only logarithmically with the sample size, and the optimization error is independent of the sequence length. This contrasts sharply with recurrent architectures, whose optimization error can grow exponentially with sequence length. Numerical experiments in a teacher-student setting corroborate the predicted scaling laws, highlighting the structural advantage of Transformers for optimizing long sequences.
📝 Abstract
Understanding why Transformers perform so well remains challenging due to their non-convex optimization landscape. In this work, we analyze a shallow Transformer with $m$ independent heads trained by projected gradient descent in the kernel regime. Our analysis reveals two main findings: (i) the width required for non-asymptotic guarantees scales only logarithmically with the sample size $n$, and (ii) the optimization error is independent of the sequence length $T$. This contrasts sharply with recurrent architectures, where the optimization error can grow exponentially with $T$. The trade-off is memory: to keep the full context, the Transformer's memory requirement grows with the sequence length. We validate our theoretical results numerically in a teacher-student setting and confirm the predicted scaling laws for Transformers.
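To make the experimental setup concrete, here is a minimal sketch of a teacher-student experiment with a shallow multi-head attention model trained by projected gradient descent. This is an illustrative toy, not the paper's exact construction: all dimensions, the scalar readout, and the finite-difference gradient are simplifying choices, and the projection onto a norm ball around the initialization stands in for the kernel-regime constraint.

```python
import numpy as np

# Illustrative teacher-student sketch, NOT the paper's exact construction:
# a single-layer attention model with m independent heads, fit to a random
# teacher of the same form by projected gradient descent.

rng = np.random.default_rng(0)
d, T, m, n = 3, 5, 4, 16   # embedding dim, sequence length, heads, samples

def forward(X, W):
    """Scalar output: average over m independent softmax-attention heads."""
    out = 0.0
    for Wh in W:                                # W: (m, d, d) score matrices
        S = X @ Wh @ X.T / np.sqrt(d)           # (T, T) attention scores
        A = np.exp(S - S.max(axis=1, keepdims=True))
        A /= A.sum(axis=1, keepdims=True)       # row-wise softmax
        out += (A @ X).mean()                   # simple scalar readout
    return out / len(W)

W_teacher = rng.normal(size=(m, d, d))
W0 = W_teacher + 0.3 * rng.normal(size=(m, d, d))  # student starts nearby
Xs = rng.normal(size=(n, T, d))
ys = np.array([forward(X, W_teacher) for X in Xs])

def loss(W):
    preds = np.array([forward(X, W) for X in Xs])
    return 0.5 * np.mean((preds - ys) ** 2)

def fd_grad(W, eps=1e-5):
    """Central finite-difference gradient (adequate at this toy scale)."""
    g = np.zeros_like(W)
    flat, gflat = W.ravel(), g.ravel()          # views into W and g
    for i in range(flat.size):
        old = flat[i]
        flat[i] = old + eps; lp = loss(W)
        flat[i] = old - eps; lm = loss(W)
        flat[i] = old
        gflat[i] = (lp - lm) / (2 * eps)
    return g

# Projected gradient descent: after each step, project back onto a
# Frobenius-norm ball of radius R around the initialization W0.
R, lr = 1.0, 0.2
W = W0.copy()
losses = [loss(W)]
for _ in range(20):
    W = W - lr * fd_grad(W)
    delta = W - W0
    nrm = np.linalg.norm(delta)
    if nrm > R:
        W = W0 + delta * (R / nrm)
    losses.append(loss(W))
print(f"loss: {losses[0]:.5f} -> {losses[-1]:.5f}")
```

Varying $n$, $T$, and $m$ in a setup like this is how one would probe the claimed scaling laws empirically: the training error should shrink as the number of heads grows, and, per the paper's second finding, should not deteriorate as the sequence length $T$ increases.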