AI Summary
To address the high computational cost and deployment challenges of Vision Transformers (ViTs), this paper proposes TinyDrop, a training-free, dynamic token pruning framework. The core innovation is a lightweight guidance model that estimates the importance of each image token in real time during inference, enabling selective removal of redundant tokens without fine-tuning the backbone network or altering its architecture. This design ensures plug-and-play applicability and compatibility across ViT architectures. TinyDrop combines lightweight attention-based guidance with a token importance scoring mechanism. On standard image classification benchmarks, it reduces FLOPs by up to 80% while incurring less than a 0.3% accuracy drop. The method substantially improves inference efficiency, establishing a practical paradigm for efficient ViT deployment.
Abstract
Vision Transformers (ViTs) achieve strong performance in image classification but incur high computational costs from processing all image tokens. To reduce the inference cost of large ViTs without compromising accuracy, we propose TinyDrop, a training-free token dropping framework guided by a lightweight vision model. The guidance model estimates token importance during inference, so that low-importance tokens are discarded before the large ViT performs its attention computations. The framework operates plug-and-play, requires no architectural modifications, and is compatible with diverse ViT architectures. Evaluations on standard image classification benchmarks demonstrate that our framework reduces FLOPs by up to 80% for ViTs with minimal accuracy degradation, highlighting its generalization capability and practical utility for efficient ViT-based classification.
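The core operation described above, keeping only the highest-importance tokens before the backbone runs attention, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the importance scores here are random stand-ins for the output of the hypothetical lightweight guidance model, and `keep_ratio` is an assumed tuning knob.

```python
import numpy as np

def drop_tokens(tokens, importance, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of tokens by importance.

    tokens:     (N, D) array of patch embeddings
    importance: (N,) per-token scores (here a stand-in for the
                guidance model's output)
    """
    n_keep = max(1, int(round(len(tokens) * keep_ratio)))
    keep_idx = np.argsort(importance)[-n_keep:]  # most important tokens
    keep_idx.sort()                              # preserve spatial order
    return tokens[keep_idx], keep_idx

# Toy example: 16 tokens with 8-dim embeddings and random scores.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))
importance = rng.random(16)

pruned, kept = drop_tokens(tokens, importance, keep_ratio=0.25)
print(pruned.shape)  # (4, 8)
```

Because self-attention cost scales quadratically with token count, keeping a quarter of the tokens cuts the attention FLOPs of subsequent layers to roughly (1/4)² of the original in this toy setting; the paper's reported savings depend on its actual scoring and drop schedule.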