vGamba: Attentive State Space Bottleneck for efficient Long-range Dependencies in Visual Recognition

📅 2025-03-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the efficiency–accuracy trade-off in modeling long-range dependencies for visual recognition, this paper proposes vGamba, a lightweight and efficient hybrid vision backbone. To overcome the limited receptive field of CNNs and the high computational cost of Vision Transformers (ViTs), it adapts State Space Models (SSMs) to 2D spatial structures through the Gamba bottleneck block, a hybrid module that couples the Gamba Cell (an adaptation of Mamba for 2D vision) with multi-head self-attention (MHSA) via a gated feature fusion mechanism. This SSM–attention hybrid preserves global contextual modeling capability while substantially reducing computational complexity. Experiments on image classification, object detection, and semantic segmentation show that vGamba achieves a superior accuracy–efficiency trade-off, outperforming several existing models while reducing FLOPs and memory footprint at comparable accuracy levels.

📝 Abstract
Capturing long-range dependencies efficiently is essential for visual recognition tasks, yet existing methods face limitations. Convolutional neural networks (CNNs) struggle with restricted receptive fields, while Vision Transformers (ViTs) achieve global context and long-range modeling at a high computational cost. State-space models (SSMs) offer an alternative, but their application in vision remains underexplored. This work introduces vGamba, a hybrid vision backbone that integrates SSMs with attention mechanisms to enhance efficiency and expressiveness. At its core is the Gamba bottleneck block, which includes the Gamba Cell, an adaptation of Mamba for 2D spatial structures, alongside a Multi-Head Self-Attention (MHSA) mechanism and a Gated Fusion Module for effective feature representation. The interplay of these components ensures that vGamba leverages the low computational demands of SSMs while maintaining the accuracy of attention mechanisms for modeling long-range dependencies in vision tasks, with the Fusion Module enabling seamless interaction between them. Extensive experiments on classification, detection, and segmentation tasks demonstrate that vGamba achieves a superior trade-off between accuracy and computational efficiency, outperforming several existing models.
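The abstract does not give equations, but a gated fusion between an SSM branch and an attention branch typically computes a sigmoid gate from both branch outputs and blends them per element. A minimal numpy sketch under that assumption (the gate parameters `W` and `b`, and the gate-from-concatenation design, are hypothetical illustrations, not the paper's exact formulation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(ssm_feat, attn_feat, W, b):
    """Blend SSM and attention branch features with a learned gate.

    The gate g is computed from the concatenated branch outputs, so the
    fused result is an element-wise convex combination of the two branches.
    """
    g = sigmoid(np.concatenate([ssm_feat, attn_feat], axis=-1) @ W + b)
    return g * ssm_feat + (1.0 - g) * attn_feat

# Toy example: N tokens, each with C channels from each branch.
rng = np.random.default_rng(0)
N, C = 4, 8
ssm_feat = rng.standard_normal((N, C))   # stand-in for Gamba Cell output
attn_feat = rng.standard_normal((N, C))  # stand-in for MHSA output
W = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)

fused = gated_fusion(ssm_feat, attn_feat, W, b)
print(fused.shape)  # (4, 8)
```

Because the gate lies in (0, 1), each fused element is bounded by the corresponding SSM and attention values, which is what lets the block trade off the efficient SSM pathway against the more expressive attention pathway per feature.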
Problem

Research questions and friction points this paper is trying to address.

Efficiently capturing long-range dependencies in visual recognition
Balancing computational cost and accuracy in vision models
Integrating state-space models with attention for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid SSMs with attention for efficiency
Gamba Cell adapts Mamba for 2D vision
Gated Fusion integrates SSMs and MHSA