CoSwin: Convolution Enhanced Hierarchical Shifted Window Attention For Small-Scale Vision

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited local feature extraction capability of Vision Transformers (ViTs) on small-scale datasets—stemming from their lack of inherent local inductive bias—this paper proposes a convolution-enhanced hierarchical shifted-window attention architecture. The method integrates two key components: (1) a learnable local feature enhancement module embedded within each attention block to enable dynamic fusion of local and global representations; and (2) hierarchical shifted-window self-attention combined with depthwise separable convolutions to explicitly inject locality and spatial structural awareness. Evaluated on CIFAR-10, CIFAR-100, and Tiny ImageNet, the proposed model consistently outperforms existing ViT variants, achieving up to a 4.92% absolute accuracy improvement. These results demonstrate its effectiveness, generalization capability, and robustness in data-scarce regimes.
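The hierarchical shifted-window attention the summary refers to restricts self-attention to non-overlapping local windows, cyclically shifting the windows between successive blocks so that information flows across window boundaries. As a rough illustration of that partitioning step only (a minimal NumPy sketch, not the authors' code; the function names and shapes are assumptions for exposition):

```python
import numpy as np

def window_partition(x, window_size):
    """Split a (H, W, C) feature map into non-overlapping windows.

    Returns shape (num_windows, window_size*window_size, C), so that
    self-attention can then be computed independently inside each window.
    """
    H, W, C = x.shape
    assert H % window_size == 0 and W % window_size == 0
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size * window_size, C)

def shift_then_partition(x, window_size, shift):
    """Cyclically shift the map before partitioning (the 'shifted window'
    step), giving consecutive blocks cross-window connections."""
    shifted = np.roll(x, shift=(-shift, -shift), axis=(0, 1))
    return window_partition(shifted, window_size)

feat = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
wins = window_partition(feat, 4)            # 4 windows of 16 tokens each
shifted_wins = shift_then_partition(feat, 4, 2)
```

In a full model, attention weights would be computed per window over `wins`, with a mask handling the wrap-around tokens introduced by the cyclic shift.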

📝 Abstract
Vision Transformers (ViTs) have achieved impressive results in computer vision by leveraging self-attention to model long-range dependencies. However, their emphasis on global context often comes at the expense of local feature extraction on small datasets, particularly due to the lack of key inductive biases such as locality and translation equivariance. To mitigate this, we propose CoSwin, a novel feature-fusion architecture that augments hierarchical shifted window attention with localized convolutional feature learning. Specifically, CoSwin integrates a learnable local feature enhancement module into each attention block, enabling the model to simultaneously capture fine-grained spatial details and global semantic structure. We evaluate CoSwin on multiple image classification benchmarks including CIFAR-10, CIFAR-100, MNIST, SVHN, and Tiny ImageNet. Our experimental results show consistent performance gains over state-of-the-art convolutional and transformer-based models. Notably, CoSwin achieves improvements of 2.17% on CIFAR-10, 4.92% on CIFAR-100, 0.10% on MNIST, 0.26% on SVHN, and 4.47% on Tiny ImageNet over the baseline Swin Transformer. These improvements underscore the effectiveness of local-global feature fusion in enhancing the generalization and robustness of transformers for small-scale vision. Code and pretrained weights are available at https://github.com/puskal-khadka/coswin
Problem

Research questions and friction points this paper is trying to address.

Addresses the local feature extraction deficiency of vision transformers on small-scale datasets
Enhances hierarchical attention with convolutional learning for spatial details
Improves generalization and robustness in small dataset image classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convolution enhanced hierarchical shifted window attention
Integrates local feature enhancement into attention blocks
Simultaneously captures spatial details and global structure
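The locality injection described above relies on depthwise separable convolution: one spatial filter per channel, followed by a 1×1 pointwise convolution that mixes channels. The sketch below is a minimal NumPy illustration of that operator under assumed shapes and names; it is not the authors' implementation, which the paper's repository provides:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution on a (H, W, C_in) feature map.

    dw_kernels: (k, k, C_in)  - one spatial filter per input channel
    pw_weights: (C_in, C_out) - 1x1 pointwise mixing across channels
    Uses 'same' zero padding and stride 1.
    """
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    dw_out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]          # (k, k, C_in) neighborhood
            dw_out[i, j, :] = np.sum(patch * dw_kernels, axis=(0, 1))
    return dw_out @ pw_weights                       # pointwise 1x1 conv
```

Compared with a standard convolution, this factorization costs roughly `k*k + C_out` multiplies per output element instead of `k*k*C_in`, which is why it is a cheap way to add spatial structural awareness inside each attention block.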
Puskal Khadka
AI Research Lab, Department of Computer Science, University of South Dakota, Vermillion, SD, USA
Rodrigue Rizk
University of South Dakota, University of Louisiana at Lafayette, Notre Dame University
AI, Reinforcement Learning, Quantum Computing, Physics-Inspired Computing, Healthcare
Longwei Wang
AI Research Lab, Department of Computer Science, University of South Dakota, Vermillion, SD, USA
KC Santosh
AI Research Lab, Department of Computer Science, University of South Dakota, Vermillion, SD, USA