AI Summary
In cross-architecture knowledge distillation (CAKD), the structural mismatch between Vision Transformer (ViT) teachers and lightweight CNN students impedes efficient knowledge transfer. To address this, we propose a dual-teacher collaborative distillation framework that jointly leverages a heterogeneous ViT teacher and a homogeneous CNN teacher. Our method introduces a prediction-difference-driven dynamic weighting mechanism, structural-discrepancy-aware residual feature distillation, and a lightweight auxiliary branch. By explicitly modeling and transferring architecture-agnostic discrepancy knowledge, it mitigates feature-space misalignment between teacher and student. Extensive experiments on HMDB51, EPIC-KITCHENS-100, and Kinetics-400 demonstrate state-of-the-art performance: our approach outperforms existing CAKD methods across all benchmarks, achieving up to a 5.95% absolute accuracy gain on HMDB51, significantly narrowing the performance gap for lightweight CNNs in video action recognition.
Abstract
Vision Transformers (ViTs) have achieved strong performance in video action recognition, but their high computational cost limits their practicality. Lightweight CNNs are more efficient but suffer from accuracy gaps. Cross-Architecture Knowledge Distillation (CAKD) addresses this by transferring knowledge from ViTs to CNNs, yet existing methods often struggle with architectural mismatch and overlook the value of stronger homogeneous CNN teachers. To tackle these challenges, we propose a Dual-Teacher Knowledge Distillation framework that leverages both a heterogeneous ViT teacher and a homogeneous CNN teacher to collaboratively guide a lightweight CNN student. We introduce two key components: (1) Discrepancy-Aware Teacher Weighting, which dynamically fuses the predictions from ViT and CNN teachers by assigning adaptive weights based on teacher confidence and prediction discrepancy with the student, enabling more informative and effective supervision; and (2) a Structure Discrepancy-Aware Distillation strategy, where the student learns the residual features between ViT and CNN teachers via a lightweight auxiliary branch, focusing on transferable architectural differences without mimicking all of ViT's high-dimensional patterns. Extensive experiments on benchmarks including HMDB51, EPIC-KITCHENS-100, and Kinetics-400 demonstrate that our method consistently outperforms state-of-the-art distillation approaches, achieving notable performance improvements with a maximum accuracy gain of 5.95% on HMDB51.
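The two components above can be sketched in code. The snippet below is a minimal NumPy illustration, not the paper's implementation: the exact weighting score (here, teacher confidence times KL divergence from the student) and the residual regression loss are plausible instantiations of "Discrepancy-Aware Teacher Weighting" and "Structure Discrepancy-Aware Distillation", and the function names, temperature, and score formula are assumptions.

```python
import numpy as np

def softmax(z, temp=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = np.asarray(z, dtype=float) / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-9):
    """KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def fuse_teachers(vit_logits, cnn_logits, student_logits, temp=4.0):
    """Hypothetical discrepancy-aware weighting: each teacher's weight grows
    with its confidence (max soft probability) and its prediction
    discrepancy from the student (KL divergence)."""
    p_vit = softmax(vit_logits, temp)
    p_cnn = softmax(cnn_logits, temp)
    p_stu = softmax(student_logits, temp)
    s_vit = p_vit.max() * kl(p_vit, p_stu)
    s_cnn = p_cnn.max() * kl(p_cnn, p_stu)
    w_vit = s_vit / (s_vit + s_cnn + 1e-9)
    # Convex combination of the two teachers' soft targets.
    return w_vit * p_vit + (1.0 - w_vit) * p_cnn

def residual_feature_loss(vit_feat, cnn_feat, aux_pred):
    """Structure discrepancy-aware target: the student's auxiliary branch
    regresses the ViT-CNN feature residual (MSE), rather than mimicking
    the ViT's high-dimensional features directly."""
    residual = np.asarray(vit_feat, dtype=float) - np.asarray(cnn_feat, dtype=float)
    return float(np.mean((np.asarray(aux_pred, dtype=float) - residual) ** 2))

# Usage: fuse two teachers' logits into one soft target for KD.
target = fuse_teachers([2.0, 0.5, 0.1], [1.0, 1.5, 0.2], [0.3, 0.3, 0.3])
```

The fused `target` is a valid probability distribution, so it can replace the single-teacher soft label in a standard KD loss; the residual loss is added as an auxiliary term weighted against it.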