🤖 AI Summary
Vision Transformers (ViTs) suffer from quadratic computational complexity in self-attention, hindering efficient inference on high-resolution images and limiting practical deployment. To address this, we propose a cross-architecture knowledge distillation framework that transfers ViTs' global representational capacity to linear-complexity recurrent student models, specifically RNNs and Mamba. Our method introduces two key innovations: (i) joint optimization of activation matching constraints and masked representation reconstruction objectives, achieving fine-grained feature alignment while preserving structured semantic information; and (ii) architecture-specific adaptation of the Mamba backbone for vision tasks. Evaluated on ImageNet, our base-scale student model achieves 84.3% top-1 accuracy, with significantly accelerated inference and markedly reduced hardware resource consumption. This work establishes a practical paradigm for lightweight, high-resolution visual model deployment.
📝 Abstract
Vision Transformers (ViTs) have delivered remarkable progress through global self-attention, yet their quadratic complexity can become prohibitive for high-resolution inputs. In this work, we present ViT-Linearizer, a cross-architecture distillation framework that transfers rich ViT representations into a linear-time, recurrent-style model. Our approach leverages 1) activation matching, an intermediate constraint that encourages the student to align its token-wise dependencies with those produced by the teacher, and 2) masked prediction, a contextual reconstruction objective that requires the student to predict the teacher's representations for unseen (masked) tokens. Together, these objectives distill the teacher's quadratic self-attention knowledge into the student while preserving its linear complexity. Empirically, our method provides notable speedups, particularly on high-resolution tasks, alleviating hardware bottlenecks at inference time. It also elevates the performance of Mamba-based architectures on standard vision benchmarks, achieving a competitive 84.3% top-1 accuracy on ImageNet with a base-sized model. Our results underscore the strong potential of RNN-based solutions for large-scale visual tasks, bridging the gap between theoretical efficiency and real-world practice.
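The two distillation objectives can be sketched as loss terms. The following is a minimal, hypothetical PyTorch sketch (not the paper's released code): `attention_matching_loss` aligns the student's token-token similarity map with a teacher attention map, and `masked_prediction_loss` regresses teacher features at masked token positions only. All function names, the similarity-based proxy for student "token-wise dependencies", and the loss weighting `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention_matching_loss(student_tokens, teacher_attn):
    # Hypothetical proxy for "activation matching": build a softmax-normalized
    # token-token similarity map from student features and pull it toward the
    # teacher's self-attention map with a KL divergence.
    scale = student_tokens.shape[-1] ** 0.5
    sim = student_tokens @ student_tokens.transpose(-1, -2) / scale  # (B, N, N)
    student_attn_log = F.log_softmax(sim, dim=-1)
    return F.kl_div(student_attn_log, teacher_attn, reduction="batchmean")

def masked_prediction_loss(student_pred, teacher_feats, mask):
    # "Masked prediction": mean-squared error between student predictions and
    # teacher representations, averaged over masked token positions only.
    per_token = ((student_pred - teacher_feats) ** 2).mean(dim=-1)  # (B, N)
    return (per_token * mask).sum() / mask.sum().clamp(min=1)

def distill_loss(student_tokens, student_pred,
                 teacher_attn, teacher_feats, mask, lam=1.0):
    # Joint objective: activation matching + weighted masked reconstruction.
    return (attention_matching_loss(student_tokens, teacher_attn)
            + lam * masked_prediction_loss(student_pred, teacher_feats, mask))
```

In a real training loop, `teacher_attn` and `teacher_feats` would be extracted from a frozen ViT teacher, while the linear-complexity student never computes a full attention matrix itself; the quadratic map is only used as a supervision target.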