Learning from Stochastic Teacher Representations Using Student-Guided Knowledge Distillation

📅 2025-04-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the tension between model compression and performance enhancement under resource constraints, this paper proposes Stochastic Self-Distillation (SSD), a framework that trains only a single student model. During distillation, diverse teacher representations are generated stochastically via dropout, while the student's own representations guide the selection and weighting of task-relevant knowledge, mitigating the misalignment between random teacher features and the downstream task. SSD introduces a student-guided knowledge distillation (SGKD) paradigm that requires no additional parameters or inference overhead and relies on a lightweight feature alignment and adaptive weighting mechanism. Evaluated on affective computing, wearable biosignal classification (UCR Archive and HAR), and image classification tasks, SSD consistently surpasses state-of-the-art methods, delivering accuracy gains without increasing model size, computational complexity, or inference latency.
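The first half of the idea can be illustrated with a short sketch: a single model produces multiple diverse "teacher" representations at distillation time simply by keeping dropout active during extra forward passes. The backbone architecture, dropout rate, and number of samples below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of distillation-time dropout (assumed backbone and hyperparameters).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy feature extractor; any backbone containing dropout layers would work."""
    def __init__(self, in_dim=128, hid_dim=256, out_dim=64, p=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hid_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def stochastic_teacher_reps(model, x, num_samples=8):
    """Sample diverse teacher representations from one model by keeping dropout on."""
    was_training = model.training
    model.train()  # dropout stays stochastic, so each pass yields a different view
    reps = torch.stack([model(x) for _ in range(num_samples)])  # (K, B, D)
    model.train(was_training)
    return reps
```

Because the teacher samples come from the same weights as the student, no additional model has to be trained, stored, or deployed.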

📝 Abstract
Advances in self-distillation have shown that when knowledge is distilled from a teacher to a student using the same deep learning (DL) architecture, the student's performance can surpass the teacher's, particularly when the network is overparameterized and the teacher is trained with early stopping. Alternatively, ensemble learning also improves performance, although training, storing, and deploying multiple models becomes impractical as the number of models grows. Even distilling an ensemble into a single student model, or applying weight averaging, first requires training multiple teacher models and does not fully leverage the inherent stochasticity of DL models for generating and distilling diversity. These constraints are particularly prohibitive in resource-constrained or latency-sensitive applications such as wearable devices. This paper proposes to train only one model and generate multiple diverse teacher representations using distillation-time dropout. However, generating these representations stochastically leads to noisy representations that are misaligned with the learned task. To overcome this problem, a novel stochastic self-distillation (SSD) training strategy is introduced that filters and weights teacher representations so that distillation draws on task-relevant representations only, using student-guided knowledge distillation (SGKD). The student representation at each distillation step is used as an authority to guide the distillation process. Experimental results on real-world affective computing datasets, wearable/biosignal datasets from the UCR Archive, the HAR dataset, and image classification datasets show that the proposed SSD method can outperform state-of-the-art methods without increasing the model size at either training or testing time, and it incurs negligible computational overhead compared to state-of-the-art ensemble learning and weight averaging methods.
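To make the student-guided filtering and weighting concrete, the sketch below weights each stochastic teacher sample by its cosine similarity to the current student representation and applies a weighted feature-matching loss. This is one plausible instantiation assumed for illustration; the paper's actual alignment and weighting mechanism may differ.

```python
# One plausible instantiation of student-guided weighting (an assumption for
# illustration; the paper's exact alignment/weighting mechanism may differ).
import torch
import torch.nn.functional as F

def sgkd_loss(student_rep, teacher_reps, temperature=0.1):
    """student_rep: (B, D); teacher_reps: (K, B, D), sampled with dropout active."""
    # Similarity between the student and each stochastic teacher sample.
    sims = F.cosine_similarity(student_rep.unsqueeze(0), teacher_reps, dim=-1)  # (K, B)
    # Softmax over the K samples: task-aligned samples receive larger weights.
    weights = F.softmax(sims / temperature, dim=0).unsqueeze(-1)                # (K, B, 1)
    # Weighted feature-matching loss against the detached teacher samples.
    sq_diff = (student_rep.unsqueeze(0) - teacher_reps.detach()) ** 2           # (K, B, D)
    return (weights * sq_diff).sum(dim=0).mean()

# Sketch of a training step: task loss plus a distillation term.
# The return_features flag and the 0.5 trade-off weight are hypothetical.
# logits, student_rep = model(x, return_features=True)
# teacher_reps = stochastic_teacher_reps(model, x)
# loss = F.cross_entropy(logits, y) + 0.5 * sgkd_loss(student_rep, teacher_reps)
```

The intuition is that teacher samples whose features already resemble the student's task-aligned features are treated as trustworthy, while noisy, misaligned samples are down-weighted.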
Problem

Research questions and friction points this paper is trying to address.

Improving student model performance beyond the teacher in self-distillation
Reducing resource needs by avoiding multiple teacher models
Filtering noisy teacher representations via student-guided distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses distillation-time dropout for diverse teacher representations
Introduces student-guided knowledge distillation for filtering
Maintains performance without increasing model size