🤖 AI Summary
This work establishes tight, architecture-dependent generalization error bounds for Transformer models. By combining the offset Rademacher complexity with structural properties of the model, such as matrix ranks, matrix norms, and empirical covering numbers, it derives explicit generalization bounds tailored to single-layer single-head, single-layer multi-head, and multi-layer Transformers. The analysis dispenses with the conventional boundedness assumption on feature mappings, thereby accommodating unbounded (sub-Gaussian) features and heavy-tailed distributions. The resulting bounds achieve optimal convergence rates up to constant factors, sharpening the characterization of the generalization ability of Transformer architectures.
📝 Abstract
This paper studies generalization error bounds for Transformer models. Based on the offset Rademacher complexity, we derive sharper generalization bounds for different Transformer architectures, including single-layer single-head, single-layer multi-head, and multi-layer Transformers. We first express the excess risk of Transformers in terms of the offset Rademacher complexity. By exploiting its connection with the empirical covering numbers of the corresponding hypothesis spaces, we obtain excess risk bounds that achieve optimal convergence rates up to constant factors. We then derive refined excess risk bounds by upper bounding the covering numbers of Transformer hypothesis spaces using matrix ranks and matrix norms, leading to precise, architecture-dependent generalization bounds. Finally, we relax the boundedness assumption on feature mappings and extend our theoretical results to settings with unbounded (sub-Gaussian) features and heavy-tailed distributions.
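For readers unfamiliar with the central tool, the offset Rademacher complexity is commonly written as below. This is a standard textbook formulation, not taken from the paper itself, so the exact offset parameter, loss, and hypothesis class used in the paper may differ:

```latex
% Standard form of the offset Rademacher complexity of a function class F on a
% sample x_1, ..., x_n: eps_1, ..., eps_n are i.i.d. Rademacher signs and c > 0
% is the offset parameter. The quadratic penalty -c f(x_i)^2 localizes the
% supremum and is what enables the fast excess-risk rates discussed above.
\[
  \mathcal{R}_n^{\mathrm{off}}(\mathcal{F})
  \;=\;
  \mathbb{E}_{\varepsilon}\!\left[
    \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n}
    \bigl( \varepsilon_i f(x_i) \;-\; c\, f(x_i)^{2} \bigr)
  \right].
\]
```

As the abstract outlines, this quantity is then controlled through the empirical covering numbers of the Transformer hypothesis spaces, which are in turn bounded using the ranks and norms of the weight matrices, yielding the architecture-dependent excess risk bounds.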