A Lightweight Convolution and Vision Transformer integrated model with Multi-scale Self-attention Mechanism

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) face deployment challenges in resource-constrained settings due to their large model size, high computational cost, and weak local modeling capability. To address these limitations, the authors propose SAEViT, a lightweight hybrid architecture integrating convolutional and Transformer-based components. Its key innovations are Sparsely Aggregated Attention (SAA, enabling adaptive token sampling), a Channel-Interactive Feed-Forward Network (CIFFN), and a hierarchical pyramid with depth-wise separable convolutions (DWSConv), which jointly optimize computational efficiency and representational power. Sparse attention reduces redundant computation, while deconvolution-based feature restoration and multi-scale convolutions strengthen joint local and channel modeling. On ImageNet-1K, SAEViT's two variants achieve 76.3% and 79.6% Top-1 accuracy at only 0.8 and 1.3 GFLOPs, respectively, outperforming existing lightweight models of comparable complexity. The design thus narrows the gap between efficiency and strong representation learning.

📝 Abstract
Vision Transformer (ViT) has prevailed in computer vision tasks due to its strong long-range dependency modeling ability. However, its large model size, high computational cost, and weak local feature modeling ability hinder its application in real-world scenarios. To balance computational efficiency and performance, we propose SAEViT (Sparse-Attention-Efficient-ViT), a lightweight ViT-based model with convolution blocks, to achieve efficient downstream vision tasks. Specifically, SAEViT introduces a Sparsely Aggregated Attention (SAA) module that performs adaptive sparse sampling based on image redundancy and recovers the feature map via a deconvolution operation, which significantly reduces the computational complexity of attention. In addition, a Channel-Interactive Feed-Forward Network (CIFFN) layer is developed to enhance inter-channel information exchange through feature decomposition and redistribution, mitigating redundancy in traditional feed-forward networks (FFNs). Finally, a hierarchical pyramid structure with embedded depth-wise separable convolutional blocks (DWSConv) is devised to further strengthen convolutional features. Extensive experiments on mainstream datasets show that SAEViT achieves Top-1 accuracies of 76.3% and 79.6% on ImageNet-1K classification with only 0.8 GFLOPs and 1.3 GFLOPs, respectively, demonstrating a lightweight solution for various fundamental vision tasks.
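The abstract attributes part of SAEViT's efficiency to depth-wise separable convolutional blocks (DWSConv). A quick back-of-the-envelope sketch (not from the paper; sizes are illustrative) shows why factoring a standard convolution into a depth-wise plus point-wise pair cuts the multiply-accumulate count:

```python
# Illustrative cost comparison: standard conv vs. depth-wise separable conv.
# All sizes below are hypothetical examples, not SAEViT's actual layer shapes.

def conv_macs(h, w, c_in, c_out, k):
    # Multiply-accumulates for a standard k x k convolution (stride 1, same padding).
    return h * w * c_in * c_out * k * k

def dws_conv_macs(h, w, c_in, c_out, k):
    # Depth-wise k x k conv (one filter per channel) + 1x1 point-wise conv.
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Example: 56x56 feature map, 64 -> 128 channels, 3x3 kernel.
std = conv_macs(56, 56, 64, 128, 3)
dws = dws_conv_macs(56, 56, 64, 128, 3)
print(round(std / dws, 1))  # ~8.4x fewer multiply-accumulates
```

The savings factor is roughly `(c_out * k^2) / (k^2 + c_out)`, which is why DWSConv blocks are a staple of lightweight backbones.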
Problem

Research questions and friction points this paper is trying to address.

Reducing Vision Transformer computational cost and model size
Enhancing local feature modeling in Vision Transformer
Balancing efficiency and performance for vision tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparsely Aggregated Attention reduces computational complexity
Channel-Interactive Feed-Forward Network enhances information exchange
Hierarchical pyramid structure with depth-wise separable convolutions
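The first innovation above, aggregating tokens into a sparser set before attention, can be sketched as follows. This is a minimal NumPy illustration of the general sparse-aggregation idea, not the paper's SAA module (which additionally adapts sampling to image redundancy and restores resolution via deconvolution); all shapes and the pooling choice are assumptions:

```python
import numpy as np

def sparse_aggregated_attention(x, r=2):
    """Toy sketch: attend from all N tokens to an r*r-pooled key/value set.

    x: (N, d) tokens from a square (s, s) grid; r: spatial reduction factor.
    The N x N score matrix shrinks to N x (N / r**2).
    """
    n, d = x.shape
    s = int(n ** 0.5)
    # Aggregate key/value tokens by r x r average pooling over the token grid.
    grid = x.reshape(s // r, r, s // r, r, d)
    kv = grid.mean(axis=(1, 3)).reshape(-1, d)        # (N / r**2, d)
    scores = x @ kv.T / np.sqrt(d)                     # (N, N / r**2)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ kv                                # (N, d)

tokens = np.random.default_rng(0).normal(size=(64, 32))  # 8x8 token grid
out = sparse_aggregated_attention(tokens, r=2)
print(out.shape)  # (64, 32)
```

With reduction factor r, the attention cost drops from O(N^2 d) to O(N^2 d / r^2), which is the lever behind the reported 0.8 and 1.3 GFLOP budgets.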
Yi Zhang
Department of Computer Science, Sichuan University, China
Lingxiao Wei
Department of Computer Science, Sichuan University, China
Bowei Zhang
Peking University
Ziwei Liu
Associate Professor, Nanyang Technological University
Computer Vision · Machine Learning · Computer Graphics
Kai Yi
Sichuan Police College, Intelligent Policing Key Laboratory of Sichuan Province, Luzhou, China
Shu Hu
Department of Computer Science, Sichuan University, China