🤖 AI Summary
This work addresses the high computational cost of training attention-based models such as Transformers by proposing a novel paradigm termed Attention Neural Teaching (AtteNT). AtteNT unifies nonparametric teaching theory with attention mechanisms for the first time: the implicit mapping to be learned is embodied by a dense set of sequence-property pairs, from which a teacher selects an importance-adaptive subset of examples. Viewed through functional gradient descent, this reveals a fundamental consistency between teaching and model training. The method achieves significant acceleration without compromising accuracy, reducing training time by 13.01% for large language models (LLMs) and by 20.58% for Vision Transformers (ViTs). Applicable to both from-scratch training and fine-tuning, AtteNT consistently maintains or even improves performance across diverse downstream tasks.
📝 Abstract
Attention learners, neural networks built on the attention mechanism (e.g., transformers), excel at learning the implicit relationships that relate sequences to their corresponding properties, e.g., mapping a given sequence of tokens to the probability of the next token. However, the learning process tends to be costly. To address this, we present a novel paradigm named Attention Neural Teaching (AtteNT) that reinterprets the learning process from a nonparametric teaching perspective. Nonparametric teaching provides a theoretical framework for teaching mappings that are implicitly defined (i.e., nonparametric) via example selection. Such an implicit mapping is embodied by a dense set of sequence-property pairs, from which the AtteNT teacher selects a subset to accelerate convergence in attention learner training. By analytically investigating the role of attention in parameter-based gradient descent during training, and by recasting the evolution of attention learners, shaped by parameter updates, as functional gradient descent in nonparametric teaching, we show for the first time that teaching attention learners is consistent with teaching importance-adaptive nonparametric learners. These findings position AtteNT to enhance the learning efficiency of attention learners. Specifically, we observe training time reductions of 13.01% for LLMs and 20.58% for ViTs, spanning both fine-tuning and training-from-scratch regimes. Crucially, these gains are achieved without compromising accuracy; in fact, performance is consistently preserved and often enhanced across a diverse set of downstream tasks.
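To make the teaching-by-example-selection idea concrete, here is a minimal sketch of importance-adaptive subset selection. The scoring rule (ranking candidate sequence-property pairs by an approximate per-example gradient magnitude) and all names are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

def select_teaching_subset(grad_norms, k):
    """Pick the k candidate examples with the largest importance score.

    Here the score is a per-example gradient-norm proxy for the
    functional-gradient magnitude (an illustrative assumption); the
    teacher would feed only this subset to the learner each round.
    """
    scores = np.asarray(grad_norms)
    # argsort is ascending, so take the last k indices and reverse
    # to get the top-k examples in descending order of importance.
    return np.argsort(scores)[-k:][::-1]

# Usage: six candidate sequence-property pairs, keep the three
# whose (proxy) gradients suggest they are most informative.
grad_norms = [1.0, 0.2, 0.8, 1.5, 0.1, 0.9]
subset = select_teaching_subset(grad_norms, k=3)
print(subset.tolist())  # → [3, 0, 5]
```

In this toy setting the teacher would then run the learner's parameter update on examples 3, 0, and 5 only, re-scoring the pool as the learner evolves.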