TinyDrop: Tiny Model Guided Token Dropping for Vision Transformers

πŸ“… 2025-09-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the high computational cost and deployment challenges of Vision Transformers (ViTs), this paper proposes TinyDropβ€”a training-free, dynamic token pruning framework. The core innovation lies in a lightweight guidance model that, during inference, estimates the importance of each image token in real time, enabling selective removal of redundant tokens without fine-tuning the backbone network or altering its architecture. This design ensures plug-and-play applicability and cross-ViT architectural compatibility. TinyDrop integrates lightweight attention-based guidance with a token importance scoring mechanism. On standard image classification benchmarks, it reduces FLOPs by up to 80% while incurring less than a 0.3% accuracy drop. The method significantly enhances inference efficiency, establishing a novel paradigm for efficient ViT deployment.
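The summary above describes the core mechanism: a lightweight guidance model scores each patch token, and only the highest-scoring tokens are passed to the large ViT. A minimal sketch of that guided top-k selection is below; the paper's actual scoring mechanism is attention-based, so using the tiny model's CLS-to-patch attention as the importance signal, and the `keep_ratio` value, are assumptions for illustration.

```python
import numpy as np

def drop_tokens(tokens, importance, keep_ratio=0.2):
    """Keep the top `keep_ratio` fraction of tokens by importance score.

    tokens:     (N, D) array of patch-token embeddings
    importance: (N,) per-token importance score, e.g. the guidance
                model's CLS-to-patch attention averaged over heads
                (assumed scoring signal, for illustration only)
    """
    n_keep = max(1, int(round(len(tokens) * keep_ratio)))
    # Indices of the highest-scoring tokens, restored to original
    # (spatial) order so positional structure is preserved
    keep_idx = np.sort(np.argsort(importance)[-n_keep:])
    return tokens[keep_idx], keep_idx
```

Because the selection happens before the backbone's attention layers run, the quadratic attention cost shrinks with the square of the kept-token count, which is how large FLOP reductions are possible without touching the backbone's weights.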

πŸ“ Abstract
Vision Transformers (ViTs) achieve strong performance in image classification but incur high computational costs from processing all image tokens. To reduce inference costs in large ViTs without compromising accuracy, we propose TinyDrop, a training-free token dropping framework guided by a lightweight vision model. The guidance model estimates token importance during inference, selectively discarding low-importance tokens before the large ViT performs attention computations. The framework operates plug-and-play, requires no architectural modifications, and is compatible with diverse ViT architectures. Evaluations on standard image classification benchmarks demonstrate that our framework reduces FLOPs by up to 80% for ViTs with minimal accuracy degradation, highlighting its generalization capability and practical utility for efficient ViT-based classification.
Problem

Research questions and friction points this paper is trying to address.

Reduces computational costs in Vision Transformers
Selectively drops low-importance image tokens
Maintains accuracy while decreasing FLOPs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight model guides token dropping
Selective discarding of low-importance tokens
Plug-and-play framework without architectural modifications
Guoxin Wang
School of Electrical and Electronic Engineering, University College Dublin, Ireland
Qingyuan Wang
School of Electrical and Electronic Engineering, University College Dublin, Ireland
Binhua Huang
School of Electrical and Electronic Engineering, University College Dublin, Ireland
Shaowu Chen
College of Electronics and Information Engineering, Shenzhen University, China
Deepu John
University College Dublin
Edge Computing · IoT · Wearable Sensing · Biomedical Circuits and Systems