Nonparametric Teaching of Attention Learners

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of training attention-based models such as Transformers by proposing a novel paradigm termed Attention Neural Teaching (AtteNT). AtteNT unifies nonparametric teaching theory with attention mechanisms for the first time, teaching the implicitly defined sequence-property mapping through importance-adaptive example selection. Viewed through functional gradient descent, it reveals a fundamental consistency between teaching and attention-learner training. The method achieves significant acceleration without compromising accuracy, reducing training time by 13.01% for large language models (LLMs) and by 20.58% for Vision Transformers (ViTs). Applicable to both training from scratch and fine-tuning, AtteNT consistently maintains, and often improves, performance across diverse downstream tasks.
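The summary's "importance-adaptive example selection" can be made concrete with a short sketch. Below, a hypothetical teacher scores every candidate sequence-property pair by its per-example loss, a cheap proxy for the magnitude of that example's functional-gradient contribution, and trains only on the top-k pairs each step. The names (`select_teaching_batch`, `teaching_loop`, `k`) and the loss-based score are illustrative assumptions, not the paper's actual criterion.

```python
import torch
import torch.nn.functional as F

def select_teaching_batch(model, xs, ys, k):
    """Keep the k candidate pairs the current learner finds hardest.

    Per-example loss is used as a cheap proxy for the norm of each
    example's functional-gradient contribution (an assumption of this
    sketch, not the paper's exact importance score).
    """
    with torch.no_grad():
        logits = model(xs)                                      # (n, classes)
        losses = F.cross_entropy(logits, ys, reduction="none")  # (n,)
    idx = torch.topk(losses, k).indices
    return xs[idx], ys[idx]

def teaching_loop(model, optimizer, pool_x, pool_y, steps, k):
    """Standard training loop, except each step sees only the
    teacher-selected subset instead of the full candidate pool."""
    for _ in range(steps):
        bx, by = select_teaching_batch(model, pool_x, pool_y, k)
        optimizer.zero_grad()
        F.cross_entropy(model(bx), by).backward()
        optimizer.step()
```

Scoring under `torch.no_grad()` keeps selection to a single forward pass over the pool, which is what makes teaching by subset selection cheaper than training on every pair.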

📝 Abstract
Attention learners, neural networks built on the attention mechanism, e.g., transformers, excel at learning the implicit relationships that relate sequences to their corresponding properties, e.g., mapping a given sequence of tokens to the probability of the next token. However, the learning process tends to be costly. To address this, we present a novel paradigm named Attention Neural Teaching (AtteNT) that reinterprets the learning process through a nonparametric teaching perspective. Specifically, the latter provides a theoretical framework for teaching mappings that are implicitly defined (i.e., nonparametric) via example selection. Such an implicit mapping is embodied through a dense set of sequence-property pairs, with the AtteNT teacher selecting a subset to accelerate convergence in attention learner training. By analytically investigating the role of attention on parameter-based gradient descent during training, and recasting the evolution of attention learners, shaped by parameter updates, through functional gradient descent in nonparametric teaching, we show for the first time that teaching attention learners is consistent with teaching importance-adaptive nonparametric learners. These new findings readily commit AtteNT to enhancing the learning efficiency of attention learners. Specifically, we observe training time reductions of 13.01% for LLMs and 20.58% for ViTs, spanning both fine-tuning and training-from-scratch regimes. Crucially, these gains are achieved without compromising accuracy; in fact, performance is consistently preserved and often enhanced across a diverse set of downstream tasks.
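For concreteness, the functional-gradient-descent view the abstract invokes can be written out in a minimal form. The squared loss, the RKHS kernel K, and the greedy selection rule below are assumptions made for illustration; the paper's exact formulation may differ.

```latex
% Functional gradient step on a teacher-selected pair (x_t, y_t), assuming
% squared loss L(f) = (1/2)(f(x_t) - y_t)^2 in an RKHS with kernel K:
\[
  f_{t+1} = f_t - \eta\,\nabla_f L(f_t)
          = f_t - \eta\,\bigl(f_t(x_t) - y_t\bigr)\,K(x_t,\cdot),
\]
% so a greedy teacher picks the pair that yields the steepest descent:
\[
  (x_t, y_t) \in \operatorname*{arg\,max}_{(x,\,y)\in\mathcal{D}}
      \bigl|\,f_t(x) - y\,\bigr|.
\]
```

Teaching then amounts to choosing, from the dense set of sequence-property pairs, the subset whose functional gradients move the learner toward the target mapping fastest.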
Problem

Research questions and friction points this paper is trying to address.

attention learners
nonparametric teaching
training efficiency
sequence-property mapping
computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

nonparametric teaching
attention learners
example selection
functional gradient descent
training efficiency
👥 Authors

Chen Zhang
The University of Hong Kong
Statistical machine learning · Nonparametric methods

Jianghui Wang
King Abdullah University of Science and Technology

Bingyang Cheng
The University of Hong Kong

Zhongtao Chen
The University of Hong Kong

Wendong Xu
The University of Hong Kong

Cong Wang
Independent Researcher

Marco Canini
Professor of Computer Science, KAUST
Systems · Networking · Distributed Systems · Machine Learning

Francesco Orabona
Associate Professor, KAUST
Online Learning · Machine Learning · Optimization · Learning Theory

Yik Chung Wu
The University of Hong Kong

Ngai Wong
The University of Hong Kong