🤖 AI Summary
Vision Mamba suffers from high-norm token artifacts in low-information background regions, which degrade feature quality. To address this, we introduce learnable register tokens into the Mamba architecture, adapted to its unidirectional scanning property via two modifications: registers are inserted evenly throughout the input token sequence, and they are recycled for the final prediction. The resulting architecture, Mamba-R, suppresses background artifacts and yields feature maps more focused on semantically meaningful regions. On ImageNet, the Base model reaches 83.0% top-1 accuracy, a 1.2% gain over Vim-B's 81.8%, and we provide the first successful scaling to a Large model (341M parameters), which reaches 83.6% and rises to 84.5% when fine-tuned with 384x384 inputs. Consistent gains on downstream tasks, including semantic segmentation, validate strong generalization. This work offers a principled approach to stabilizing token representations in state-space models for vision.
📝 Abstract
Similar to prior findings in Vision Transformers, this paper identifies artifacts in the feature maps of Vision Mamba. These artifacts, corresponding to high-norm tokens emerging in low-information background areas of images, are much more severe in Vision Mamba -- they appear prevalently even in tiny-sized models and activate extensively across background regions. To mitigate this issue, we follow the prior solution of introducing register tokens into Vision Mamba. To better suit Mamba blocks' uni-directional inference paradigm, two key modifications are introduced: 1) evenly inserting registers throughout the input token sequence, and 2) recycling registers for final decision predictions. We term this new architecture Mamba-R. Qualitative observations suggest that, compared to vanilla Vision Mamba, Mamba-R's feature maps are cleaner and more focused on semantically meaningful regions. Quantitatively, Mamba-R attains stronger performance and scales better: on the ImageNet benchmark, our base-size Mamba-R attains 83.0% accuracy, significantly outperforming Vim-B's 81.8%; furthermore, we provide the first successful scaling to the large model size (341M parameters), attaining a competitive accuracy of 83.6% (84.5% if fine-tuned with 384x384 inputs). Additional validation on the downstream semantic segmentation task also supports Mamba-R's efficacy. Code is available at https://github.com/wangf3014/Mamba-Reg.
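The two modifications described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the function names (`insert_registers`, `recycle_registers`), the chunking strategy, and the flatten-based recycling are assumptions for clarity; the actual Mamba-R head and register placement may differ in detail.

```python
import torch

def insert_registers(tokens: torch.Tensor, registers: torch.Tensor):
    """Evenly insert R learnable register tokens into a patch-token sequence.

    tokens:    (B, N, D) patch tokens
    registers: (R, D) learnable register parameters (hypothetical names)
    Returns the interleaved sequence (B, N + R, D) and the register positions.
    """
    B, N, D = tokens.shape
    R = registers.shape[0]
    reg = registers.unsqueeze(0).expand(B, -1, -1)       # (B, R, D)
    chunks = torch.chunk(tokens, R, dim=1)               # R roughly equal chunks
    pieces, positions, idx = [], [], 0
    for i, chunk in enumerate(chunks):
        pieces.append(reg[:, i:i + 1])                   # one register per chunk
        positions.append(idx)
        idx += 1 + chunk.shape[1]
        pieces.append(chunk)
    return torch.cat(pieces, dim=1), positions

def recycle_registers(seq: torch.Tensor, positions: list[int]) -> torch.Tensor:
    """Gather the registers back out of the output sequence and concatenate
    them into one vector per image, to be fed to the classification head."""
    regs = seq[:, positions]                             # (B, R, D)
    return regs.flatten(1)                               # (B, R * D)

# Toy usage: 2 images, 8 patch tokens of dim 4, with 4 registers.
x = torch.randn(2, 8, 4)
r = torch.randn(4, 4)
seq, pos = insert_registers(x, r)
features = recycle_registers(seq, pos)
```

In this sketch the registers are spread uniformly over the sequence so that, under uni-directional scanning, every segment of the input has a nearby register; recycling them (rather than discarding, as in the ViT-registers approach) gives the classifier a view aggregated from multiple scan positions.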