🤖 AI Summary
Vision Transformers (ViTs) face deployment challenges in resource-constrained settings due to their large model size, high computational cost, and weak local modeling capability. To address these limitations, we propose SAEViT, a lightweight hybrid architecture integrating convolutional and Transformer-based components. Its key innovations are Sparsely Aggregated Attention (SAA) with adaptive token sampling, a Channel-Interactive Feed-Forward Network (CIFFN), and a hierarchical pyramid built on depth-wise separable convolution (DWSConv) blocks, which jointly optimize computational efficiency and representational power. Sparse attention reduces redundant computation, while deconvolution-based feature restoration and multi-scale convolutions strengthen joint local and channel modeling. On ImageNet-1K, SAEViT achieves 76.3% and 79.6% Top-1 accuracy with only 0.8 GFLOPs and 1.3 GFLOPs, respectively, outperforming existing lightweight models of comparable complexity. The design thus bridges the gap between efficiency and strong representation learning.
📝 Abstract
Vision Transformer (ViT) has prevailed in computer vision tasks due to its strong long-range dependency modeling ability. However, its large model size, high computational cost, and weak local feature modeling ability hinder its application in real scenarios. To balance computational efficiency and performance, in this paper we propose SAEViT (Sparse-Attention-Efficient-ViT), a lightweight ViT-based model with convolution blocks, for efficient downstream vision tasks. Specifically, SAEViT introduces a Sparsely Aggregated Attention (SAA) module that performs adaptive sparse sampling based on image redundancy and recovers the feature map via a deconvolution operation, which significantly reduces the computational complexity of attention. In addition, a Channel-Interactive Feed-Forward Network (CIFFN) layer is developed to enhance inter-channel information exchange through feature decomposition and redistribution, mitigating the redundancy of traditional feed-forward networks (FFNs). Finally, a hierarchical pyramid structure with embedded depth-wise separable convolution (DWSConv) blocks is devised to further strengthen convolutional features. Extensive experiments on mainstream datasets show that SAEViT achieves Top-1 accuracies of 76.3% and 79.6% on the ImageNet-1K classification task with only 0.8 GFLOPs and 1.3 GFLOPs, respectively, demonstrating a lightweight solution for various fundamental vision tasks.
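The core efficiency idea behind SAA, attend over an aggregated subset of tokens and then restore the full-resolution feature map, can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: it uses uniform average pooling as a stand-in for adaptive sparse sampling and nearest-neighbor upsampling as a stand-in for the learned deconvolution; the function name and the reduction factor `r` are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_aggregated_attention(x, r=2):
    """Toy sketch of the SAA idea on an (H, W, C) feature map.

    1) Aggregate tokens by a factor r per spatial axis (here: average
       pooling; the paper uses adaptive, redundancy-aware sampling).
    2) Run full self-attention on the reduced token set, so the cost
       drops from O(N^2) to O((N / r^2)^2) in the token count N = H*W.
    3) Restore the original resolution (here: nearest-neighbor repeat;
       the paper uses a learned deconvolution).
    """
    H, W, C = x.shape
    # Step 1: pool r x r patches into single tokens.
    pooled = x.reshape(H // r, r, W // r, r, C).mean(axis=(1, 3))
    tokens = pooled.reshape(-1, C)                # (N / r^2, C)
    # Step 2: scaled dot-product self-attention over reduced tokens.
    attn = softmax(tokens @ tokens.T / np.sqrt(C))
    out = attn @ tokens
    # Step 3: upsample back to (H, W, C).
    out = out.reshape(H // r, W // r, C)
    return out.repeat(r, axis=0).repeat(r, axis=1)

x = np.random.default_rng(0).standard_normal((8, 8, 16))
y = sparse_aggregated_attention(x, r=2)
print(y.shape)  # (8, 8, 16): attention ran over 16 tokens instead of 64
```

With `r=2` the attention matrix shrinks by a factor of 16 (from 64x64 to 16x16 here), which is the source of the FLOP savings the abstract reports; the learned deconvolution in the real model additionally recovers spatial detail that this nearest-neighbor stand-in cannot.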