🤖 AI Summary
This work investigates the regression learning capability of Transformers on noisy manifold data: inputs lie in a tubular neighborhood of a low-dimensional manifold, and ground-truth labels are determined by the projection of the data onto the manifold. In the realistic setting where high-dimensional ambient noise coexists with low-dimensional intrinsic structure, we establish, for the first time, upper bounds on both the approximation and generalization errors of Transformers, proving that these bounds depend only on the manifold's intrinsic dimension rather than the ambient dimension. Methodologically, we introduce a novel construction of Transformer representations from elementary arithmetic operations, providing a new analytical tool for the theoretical study of deep models under the manifold hypothesis. Our analysis shows that Transformers automatically exploit the low-complexity geometric structure underlying the task, achieving efficient learning even under substantial ambient noise.
📝 Abstract
Transformers serve as the foundational architecture for large language and video generation models such as GPT, BERT, Sora, and their successors. Empirical studies have demonstrated that real-world data and learning tasks exhibit low-dimensional structure, along with some noise or measurement error. The performance of transformers tends to depend on the intrinsic dimension of the data and tasks, yet this dependence remains largely unexplored theoretically. This work establishes a theoretical foundation by analyzing the performance of transformers on regression tasks with noisy input data on a manifold. Specifically, the input data lie in a tubular neighborhood of a manifold, while the ground-truth function depends on the projection of the noisy data onto the manifold. We prove approximation and generalization error bounds that depend crucially on the intrinsic dimension of the manifold. Our results demonstrate that transformers can leverage low-complexity structures in learning tasks even when the input data are perturbed by high-dimensional noise. Our proof introduces a novel technique that constructs transformer representations of basic arithmetic operations, which may be of independent interest.
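To make the data model concrete, here is a minimal sketch (not the paper's construction) of the noisy-manifold regression setup: the manifold is assumed to be the unit circle embedded in the first two coordinates of a high-dimensional ambient space, noisy inputs are drawn from a tubular neighborhood of radius `tau`, and labels depend only on the projection of each noisy point onto the manifold. All dimensions, the tube radius, and the label function are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper): the intrinsic
# dimension is 1 (a circle) while the ambient dimension D is much larger.
D, n, tau = 64, 1000, 0.05  # ambient dim, sample size, tube radius

# Sample clean points on the manifold: the unit circle in coords (0, 1).
theta = rng.uniform(0.0, 2 * np.pi, size=n)
clean = np.zeros((n, D))
clean[:, 0], clean[:, 1] = np.cos(theta), np.sin(theta)

# Perturb each point by ambient noise of norm at most tau, so the noisy
# inputs lie in a tubular neighborhood of the manifold.
noise = rng.normal(size=(n, D))
noise *= tau * rng.uniform(size=(n, 1)) / np.linalg.norm(noise, axis=1, keepdims=True)
x = clean + noise

# Ground-truth labels are determined by the projection of the noisy data
# onto the manifold: for the circle, the projected point's angle is the
# angle of the first two coordinates.
proj_angle = np.arctan2(x[:, 1], x[:, 0])
y = np.sin(3 * proj_angle)  # an arbitrary smooth function on the manifold
```

A learner sees only the pairs `(x, y)`; the theoretical claim is that a transformer trained on such pairs achieves error rates governed by the intrinsic dimension (here 1), not the ambient dimension `D`.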