🤖 AI Summary
This work addresses the lack of fair, standardized efficiency evaluation protocols for Vision Transformers (ViTs) across diverse domains and experimental settings. We introduce the first open-source, reproducible large-scale efficiency benchmark, evaluating over 45 models on image classification across three dimensions: accuracy, inference latency, and memory footprint. Methodologically, we propose a Pareto frontier–based multi-objective optimization framework—the first systematic analysis of ViT Pareto optimality—revealing fundamental trade-offs among accuracy, speed, and resource consumption. Key findings include the superior parameter and memory efficiency of CNN-attention hybrid architectures, and the observation that scaling model depth/width generally yields greater efficiency gains than increasing input resolution. Our contributions are threefold: (1) establishing a standardized efficiency evaluation paradigm; (2) providing empirically grounded guidelines for model selection; and (3) inspiring new design principles for efficient ViTs.
📝 Abstract
Self-attention in Transformers incurs a high computational cost due to its quadratic complexity, but its effectiveness in addressing problems in language and vision has sparked extensive research aimed at enhancing efficiency. However, diverse experimental conditions, spanning multiple input domains, prevent a fair comparison based solely on reported results, posing challenges for model selection. To address this gap in comparability, we perform a large-scale benchmark of more than 45 models for image classification, evaluating key efficiency aspects, including accuracy, speed, and memory usage. Our benchmark provides a standardized baseline for efficiency-oriented transformers. We analyze the results based on the Pareto front -- the boundary of optimal models, where no metric can be improved without worsening another. Surprisingly, despite claims of other models being more efficient, ViT remains Pareto optimal across multiple metrics. We observe that hybrid attention-CNN models exhibit remarkable inference memory- and parameter-efficiency. Moreover, our benchmark shows that using a larger model is generally more efficient than using higher-resolution images. Thanks to our holistic evaluation, we provide a centralized resource for practitioners and researchers, facilitating informed decisions when selecting or developing efficient transformers.
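The Pareto-front analysis described above can be sketched in a few lines: a model is Pareto optimal if no other model matches or beats it on every metric while strictly beating it on at least one. The sketch below is illustrative; the model names and numbers are hypothetical placeholders, not measurements from the benchmark.

```python
from typing import NamedTuple

class Model(NamedTuple):
    name: str
    accuracy: float    # higher is better
    latency_ms: float  # lower is better
    memory_mb: float   # lower is better

def dominates(a: Model, b: Model) -> bool:
    """True if `a` is at least as good as `b` on every metric
    and strictly better on at least one."""
    at_least_as_good = (a.accuracy >= b.accuracy
                        and a.latency_ms <= b.latency_ms
                        and a.memory_mb <= b.memory_mb)
    strictly_better = (a.accuracy > b.accuracy
                       or a.latency_ms < b.latency_ms
                       or a.memory_mb < b.memory_mb)
    return at_least_as_good and strictly_better

def pareto_front(models: list[Model]) -> list[Model]:
    """Return the models not dominated by any other model."""
    return [m for m in models
            if not any(dominates(other, m) for other in models)]

# Hypothetical entries for illustration only.
candidates = [
    Model("ViT-B", 81.8, 12.0, 330.0),
    Model("Hybrid-S", 81.0, 10.0, 150.0),
    Model("Sparse-L", 80.5, 14.0, 400.0),  # dominated by ViT-B
]
front = pareto_front(candidates)
```

With these placeholder numbers, `Sparse-L` falls off the front because `ViT-B` is better on all three metrics, while `ViT-B` and `Hybrid-S` each survive by winning on different metrics.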