ViT-Linearizer: Distilling Quadratic Knowledge into Linear-Time Vision Models

📅 2025-03-30
🤖 AI Summary
Vision Transformers (ViTs) suffer from the quadratic computational complexity of self-attention, which hinders efficient inference on high-resolution images and limits practical deployment. To address this, we propose a cross-architecture knowledge distillation framework that transfers the global representational capacity of ViTs to linear-complexity recurrent student models, specifically RNNs and Mamba. Our method introduces two key innovations: (i) joint optimization of activation-matching constraints and masked representation reconstruction objectives, achieving fine-grained feature alignment while preserving structured semantic information; and (ii) architecture-specific adaptation of the Mamba backbone for vision tasks. Evaluated on ImageNet, our base-scale student model achieves 84.3% top-1 accuracy, with significantly faster inference and markedly lower hardware resource consumption. This work establishes a novel paradigm for deploying lightweight, high-resolution vision models.

📝 Abstract
Vision Transformers (ViTs) have delivered remarkable progress through global self-attention, yet their quadratic complexity can become prohibitive for high-resolution inputs. In this work, we present ViT-Linearizer, a cross-architecture distillation framework that transfers rich ViT representations into a linear-time, recurrent-style model. Our approach leverages 1) activation matching, an intermediate constraint that encourages the student to align its token-wise dependencies with those produced by the teacher, and 2) masked prediction, a contextual reconstruction objective that requires the student to predict the teacher's representations for unseen (masked) tokens, to effectively distill the quadratic self-attention knowledge into the student while maintaining efficient complexity. Empirically, our method provides notable speedups, particularly for high-resolution tasks, significantly addressing the hardware challenges of inference. It also elevates the performance of Mamba-based architectures on standard vision benchmarks, achieving a competitive 84.3% top-1 accuracy on ImageNet with a base-sized model. Our results underscore the strong potential of RNN-based solutions for large-scale visual tasks, bridging the gap between theoretical efficiency and real-world practice.
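The two distillation objectives described above can be illustrated with a minimal NumPy sketch. This is an interpretation under stated assumptions, not the paper's implementation: the token-wise dependency maps are taken to be cosine-similarity (Gram) matrices over token features, both losses are taken as mean-squared error, and the weighting `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def activation_matching_loss(student_feats, teacher_feats):
    """Align the student's token-wise dependency map with the teacher's.
    Inputs are (num_tokens, dim) feature arrays; the dependency map is
    modeled here as a cosine-similarity matrix over tokens (an assumption)."""
    def sim_map(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)  # cosine-normalize each token
        return x @ x.T                                    # (N, N) token-affinity map
    diff = sim_map(student_feats) - sim_map(teacher_feats)
    return np.mean(diff ** 2)

def masked_prediction_loss(student_feats, teacher_feats, mask):
    """Contextual reconstruction: the student must predict the teacher's
    representations at masked token positions (boolean mask over tokens)."""
    diff = student_feats[mask] - teacher_feats[mask]
    return np.mean(diff ** 2)

def distillation_loss(student_feats, teacher_feats, mask, lam=1.0):
    """Joint objective: activation matching plus masked prediction.
    `lam` is a hypothetical balancing weight, not taken from the paper."""
    return (activation_matching_loss(student_feats, teacher_feats)
            + lam * masked_prediction_loss(student_feats, teacher_feats, mask))
```

In a full pipeline, `student_feats` would come from the linear-time recurrent student and `teacher_feats` from a frozen ViT teacher; the joint loss drives the student's token dependencies and masked reconstructions toward the teacher's.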
Problem

Research questions and friction points this paper is trying to address.

Reducing quadratic complexity of Vision Transformers for high-resolution inputs
Distilling ViT knowledge into linear-time recurrent models
Improving efficiency and performance of vision models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-architecture distillation for linear-time models
Activation matching for token-wise dependency alignment
Masked prediction for contextual reconstruction