🤖 AI Summary
Modeling long-range 2D dependencies in gigapixel whole-slide image (WSI) classification remains challenging: existing Transformers suffer from quadratic complexity and spatial distortion due to 1D tokenization, while conventional 2D state space models (SSMs) incur prohibitive computational overhead. This paper introduces the first hardware-aware 2D selective SSM, which processes images natively in 2D, propagates state efficiently across both rows and columns, and leverages custom GPU kernels for high throughput. The method preserves linear complexity and high parallelism while faithfully capturing intrinsic 2D spatial continuity. Extending the Mamba architecture, it also integrates with VMamba for hierarchical visual representation learning. Evaluated on ten public WSI datasets, the approach achieves gains of up to 2.48% in AUC and 3.11% in F1 score; it also improves mIoU by 0.5–0.7 on ADE20k and top-1 accuracy by 0.2% on ImageNet-1K.
📝 Abstract
Efficiently modeling large 2D contexts is essential for various fields including Giga-Pixel Whole Slide Imaging (WSI) and remote sensing. Transformer-based models offer high parallelism but face challenges due to their quadratic complexity when handling long sequences. Recently, Mamba introduced a selective State Space Model (SSM) with linear complexity and high parallelism, enabling effective and efficient modeling of wide context in 1D sequences. However, extending Mamba to vision tasks, which inherently involve 2D structures, results in spatial discrepancies due to the limitations of 1D sequence processing. On the other hand, current 2D SSMs inherently model 2D structures, but they suffer from prohibitively slow computation due to the lack of efficient parallel algorithms. In this work, we propose 2DMamba, a novel 2D selective SSM framework that incorporates the 2D spatial structure of images into Mamba, with a highly optimized hardware-aware operator, achieving both spatial continuity and computational efficiency. We validate the versatility of our approach on both WSIs and natural images. Extensive experiments on 10 public datasets for WSI classification and survival analysis show that 2DMamba improves AUC by up to 2.48%, F1 score by 3.11%, accuracy by 2.47% and C-index by 5.52%. Additionally, integrating our method with VMamba for natural imaging yields improvements of 0.5 to 0.7 mIoU on the ADE20k semantic segmentation dataset, and a 0.2% accuracy improvement on the ImageNet-1K classification dataset. Our code is available at https://github.com/AtlasAnalyticsLab/2DMamba.
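To make the core idea concrete, the sketch below shows a naive (sequential, non-selective) 2D state-space scan: each position's hidden state aggregates the states of its left and top neighbors, so information propagates across rows and columns in raster order. This is only an illustrative toy with hypothetical scalar parameters `a_h`, `a_v`, `b`, `c`; the paper's actual operator is selective (input-dependent) and implemented as a parallel hardware-aware GPU kernel rather than this O(HW) Python loop.

```python
def naive_2d_ssm_scan(x, a_h, a_v, b, c):
    """Toy sequential 2D state-space scan over an H x W grid of scalars.

    Recurrence (illustrative, not the paper's parameterization):
        h[i][j] = a_h * h[i][j-1] + a_v * h[i-1][j] + b * x[i][j]
        y[i][j] = c * h[i][j]
    a_h / a_v are horizontal / vertical state-decay coefficients;
    b and c are input / output projections.
    """
    H, W = len(x), len(x[0])
    h = [[0.0] * W for _ in range(H)]  # hidden states
    y = [[0.0] * W for _ in range(H)]  # outputs
    for i in range(H):
        for j in range(W):
            left = h[i][j - 1] if j > 0 else 0.0  # state from the left
            top = h[i - 1][j] if i > 0 else 0.0   # state from above
            h[i][j] = a_h * left + a_v * top + b * x[i][j]
            y[i][j] = c * h[i][j]
    return y
```

Because every cell depends on both its left and top neighbors, a single raster-scan pass already gives each output a receptive field covering the whole upper-left quadrant, with linear cost in the number of pixels; the paper's contribution is making a selective version of such a scan run in parallel on GPUs.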