A2Mamba: Attention-augmented State Space Models for Visual Recognition

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hybrid Transformer-Mamba architectures merely stack layers of the two paradigms without enabling cross-module interaction. To address this limitation, we propose A2Mamba, a unified architecture that integrates Transformers and Mamba through a new token mixer, the Multi-scale Attention-augmented State Space Model (MASS). Its core, an attention-augmented SSM (A2SSM), spatially aggregates the SSM's hidden states with multi-scale attention maps in a cross-attention-like step, jointly capturing 2D structural dependencies and sequential dynamics within a single coherent framework and enabling joint modeling of local details and global context. On ImageNet-1K, A2Mamba-L achieves 86.1% top-1 accuracy, and across downstream vision tasks including semantic segmentation and object detection the A2Mamba family surpasses ConvNet-, pure Transformer-, and vanilla Mamba-based baselines while using fewer parameters and running more efficiently at inference.

📝 Abstract
Transformers and Mamba, initially invented for natural language processing, have inspired backbone architectures for visual recognition. Recent studies have integrated Local Attention Transformers with Mamba to capture both local details and global contexts. Despite competitive performance, these methods are limited to simple stacking of Transformer and Mamba layers without any interaction mechanism between them. Thus, deep integration between Transformer and Mamba layers remains an open problem. We address this problem by proposing A2Mamba, a powerful Transformer-Mamba hybrid network architecture, featuring a new token mixer termed Multi-scale Attention-augmented State Space Model (MASS), where multi-scale attention maps are integrated into an attention-augmented SSM (A2SSM). A key step of A2SSM performs a variant of cross-attention by spatially aggregating the SSM's hidden states using the multi-scale attention maps, which enhances spatial dependencies pertaining to a two-dimensional space while improving the dynamic modeling capabilities of SSMs. Our A2Mamba outperforms all previous ConvNet-, Transformer-, and Mamba-based architectures in visual recognition tasks. For instance, A2Mamba-L achieves an impressive 86.1% top-1 accuracy on ImageNet-1K. In semantic segmentation, A2Mamba-B exceeds CAFormer-S36 by 2.5% in mIoU, while exhibiting higher efficiency. In object detection and instance segmentation with Cascade Mask R-CNN, A2Mamba-S surpasses MambaVision-B by 1.2%/0.9% in AP^b/AP^m, while having 40% fewer parameters. Code is publicly available at https://github.com/LMMMEng/A2Mamba.
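
To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the idea: depthwise convolutions at several kernel sizes stand in for the multi-scale attention maps, a toy per-channel recurrence stands in for Mamba's selective scan, and elementwise gating stands in for the cross-attention-style spatial aggregation of the hidden states. All names (`SimpleSSMScan`, `A2SSMSketch`), kernel sizes, and the gating form are simplifications introduced here for illustration, not the authors' implementation; see https://github.com/LMMMEng/A2Mamba for the official code.

```python
import torch
import torch.nn as nn


class SimpleSSMScan(nn.Module):
    """Toy per-channel linear recurrence, h_t = a * h_{t-1} + b * x_t,
    standing in for Mamba's selective scan."""

    def __init__(self, dim: int):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(dim))
        self.b = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, C)
        a = torch.sigmoid(self.log_a)  # keep the decay in (0, 1) for stability
        h = torch.zeros_like(x[:, 0])
        states = []
        for t in range(x.size(1)):
            h = a * h + self.b * x[:, t]
            states.append(h)
        return torch.stack(states, dim=1)  # hidden states, (B, L, C)


class A2SSMSketch(nn.Module):
    """Sketch of the MASS idea: multi-scale attention maps spatially
    re-aggregate the hidden states of an SSM scan over the flattened image."""

    def __init__(self, dim: int, scales=(3, 5, 7)):
        super().__init__()
        # Depthwise convs at several kernel sizes stand in for multi-scale attention.
        self.attn_convs = nn.ModuleList(
            nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim) for k in scales
        )
        self.ssm = SimpleSSMScan(dim)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # 1) Multi-scale spatial attention maps from the 2D feature map.
        attn = sum(torch.sigmoid(conv(x)) for conv in self.attn_convs) / len(self.attn_convs)
        # 2) SSM scan over the flattened token sequence -> hidden states.
        seq = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        hidden = self.ssm(seq).transpose(1, 2).reshape(B, C, H, W)
        # 3) Cross-attention-like step: attention maps reweight the hidden
        #    states, restoring 2D spatial dependencies lost in the 1D scan.
        return self.proj(attn * hidden)


if __name__ == "__main__":
    x = torch.randn(2, 64, 14, 14)
    print(A2SSMSketch(64)(x).shape)  # torch.Size([2, 64, 14, 14])
```

The design point the sketch preserves is that the attention maps act on the SSM's hidden states rather than on its inputs or outputs, which is what distinguishes genuine interaction between the two paradigms from mere layer stacking.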
Problem

Research questions and friction points this paper is trying to address.

Deep integration between Transformer and Mamba layers remains an open problem
Existing hybrids simply stack Transformer and Mamba layers without any interaction mechanism between them
SSMs process tokens as 1D sequences and need stronger 2D spatial dependency modeling for visual recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Transformer-Mamba network architecture (A2Mamba)
Multi-scale Attention-augmented State Space Model (MASS) token mixer
Cross-attention-style spatial aggregation of SSM hidden states via multi-scale attention maps